diff --git a/CHANGELOG.md b/CHANGELOG.md index 91ef4e214..d6fcfe297 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -1,3 +1,329 @@ +## 3.7.0 + +This release introduces major new features and internal refactoring. It is an important step toward the 4.0 release planned soon, which will remove all deprecations introduced in 3.7. + +๐Ÿ›ฉ๏ธ _Features_ + +### ๐Ÿ”ฅ **Native Element Functions** + +A new [Els API](./els) for direct element interactions has been introduced. This API provides low-level element manipulation functions for more granular control over element interactions and assertions: + +- `element()` - perform custom operations on first matching element +- `eachElement()` - iterate and perform operations on each matching element +- `expectElement()` - assert condition on first matching element +- `expectAnyElement()` - assert condition matches at least one element +- `expectAllElements()` - assert condition matches all elements + +Example using all element functions: + +```js +const { element, eachElement, expectElement, expectAnyElement, expectAllElements } = require('codeceptjs/els') + +// ... + +Scenario('element functions demo', async ({ I }) => { + // Get attribute of first button + const attr = await element('.button', async el => await el.getAttribute('data-test')) + + // Log text of each list item + await eachElement('.list-item', async (el, idx) => { + console.log(`Item ${idx}: ${await el.getText()}`) + }) + + // Assert first submit button is enabled + await expectElement('.submit', async el => await el.isEnabled()) + + // Assert at least one product is in stock + await expectAnyElement('.product', async el => { + return (await el.getAttribute('data-status')) === 'in-stock' + }) + + // Assert all required fields have required attribute + await expectAllElements('.required', async el => { + return (await el.getAttribute('required')) !== null + }) +}) +``` + +[Els](./els) functions expose the native API of Playwright, WebDriver, and Puppeteer helpers. The actual `el` API will differ depending on which helper is used, which affects test code interoperability. + +### ๐Ÿ”ฎ **Effects introduced** + +[Effects](./effects) is a new concept that encompasses all functions that can modify scenario flow. These functions are now part of a single module. Previously, they were used via plugins like `tryTo` and `retryTo`. Now, it is recommended to import them directly: + +```js +const { tryTo, retryTo } = require('codeceptjs/effects') + +Scenario(..., ({ I }) => { + I.amOnPage('/') + // tryTo returns boolean if code in function fails + // use it to execute actions that may fail but not affect the test flow + // for instance, for accepting cookie banners + const isItWorking = tryTo(() => I.see('It works')) + + // run multiple steps and retry on failure + retryTo(() => { + I.click('Start Working!'); + I.see('It works') + }, 5); +}) +``` + +Previously `tryTo` and `retryTo` were available globally via plugins. This behavior is deprecated as of 3.7 and will be removed in 4.0. Import these functions via effects instead. Similarly, `within` will be moved to `effects` in 4.0. + +### โœ… `check` command added + +``` +npx codeceptjs check +``` + +This command can be executed locally or in CI environments to verify that tests can be executed correctly. + +It checks: + +- configuration +- tests +- helpers + +And will attempt to open and close a browser if a corresponding helper is enabled. If something goes wrong, the command will fail with a message. 
Run `npx codeceptjs check` on CI before actual tests to ensure everything is set up correctly and all services and browsers are accessible. + +For GitHub Actions, add this command: + +```yaml +steps: + # ... + - name: check configuration and browser + run: npx codeceptjs check + + - name: run codeceptjs tests + run: npx codeceptjs run-workers 4 +``` + +### ๐Ÿ‘จโ€๐Ÿ”ฌ **analyze plugin introduced** + +This [AI plugin](./plugins#analyze) analyzes failures in test runs and provides brief summaries. For more than 5 failures, it performs cluster analysis and aggregates failures into groups, attempting to find common causes. It is recommended to use Deepseek R1 model or OpenAI o3 for better reasoning on clustering: + +```js +โ€ข SUMMARY The test failed because the expected text "Sign in" was not found on the page, indicating a possible issue with HTML elements or their visibility. +โ€ข ERROR expected web application to include "Sign in" +โ€ข CATEGORY HTML / page elements (not found, not visible, etc) +โ€ข URL http://127.0.0.1:3000/users/sign_in +``` + +For fewer than 5 failures, they are analyzed individually. If a visual recognition model is connected, AI will also scan screenshots to suggest potential failure causes (missing button, missing text, etc). + +This plugin should be paired with the newly added [`pageInfo` plugin](./plugins/#pageInfo) which stores important information like URL, console logs, and error classes for further analysis. + +### ๐Ÿ‘จโ€๐Ÿ’ผ **autoLogin plugin** renamed to **auth plugin** + +[`auth`](/plugins#auth) is the new name for the autoLogin plugin and aims to solve common authorization issues. In 3.7 it can use Playwright's storage state to load authorization cookies in a browser on start. So if a user is already authorized, a browser session starts with cookies already loaded for this user. If you use Playwright, you can enable this behavior using the `loginAs` method inside a `BeforeSuite` hook: + +```js +BeforeSuite(({ loginAs }) => loginAs('user')) +``` + +The previous behavior where `loginAs` was called from a `Before` hook also works. However, cookie loading and authorization checking is performed after the browser starts. + +#### Metadata introduced + +Meta information in key-value format can be attached to Scenarios to provide more context when reporting tests: + +```js +// add Jira issue to scenario +Scenario('...', () => { + // ... +}).meta('JIRA', 'TST-123') + +// or pass meta info in the beginning of scenario: +Scenario('my test linked to Jira', meta: { issue: 'TST-123' }, () => { + // ... +}) +``` + +By default, Playwright helpers add browser and window size as meta information to tests. + +### ๐Ÿ‘ข Custom Steps API + +Custom Steps or Sections API introduced to group steps into sections: + +```js +const { Section } = require('codeceptjs/steps'); + +Scenario({ I } => { + I.amOnPage('/projects'); + + // start section "Create project" + Section('Create a project'); + I.click('Create'); + I.fillField('title', 'Project 123') + I.click('Save') + I.see('Project created') + // calling Section with empty param closes previous section + Section() + + // previous section automatically closes + // when new section starts + Section('open project') + // ... 
+}); +``` + +To hide steps inside a section from output use `Section().hidden()` call: + +```js +Section('Create a project').hidden() +// next steps are not printed: +I.click('Create') +I.fillField('title', 'Project 123') +Section() +``` + +Alternative syntax for closing section: `EndSection`: + +```js +const { Section, EndSection } = require('codeceptjs/steps'); + +// ... +Scenario(..., ({ I }) => // ... + + Section('Create a project').hidden() + // next steps are not printed: + I.click('Create'); + I.fillField('title', 'Project 123') + EndSection() +``` + +Also available BDD-style pre-defined sections: + +```js +const { Given, When, Then } = require('codeceptjs/steps'); + +// ... +Scenario(..., ({ I }) => // ... + + Given('I have a project') + // next steps are not printed: + I.click('Create'); + I.fillField('title', 'Project 123') + + When('I open project'); + // ... + + Then('I should see analytics in a project') + //.... +``` + +### ๐Ÿฅพ Step Options + +Better syntax to set general step options for specific tests. + +Use it to set timeout or retries for specific steps: + +```js +const step = require('codeceptjs/steps'); + +Scenario(..., ({ I }) => // ... + I.click('Create', step.timeout(10).retry(2)); + //.... +``` + +Alternative syntax: + +```js +const { stepTimeout, stepRetry } = require('codeceptjs/steps'); + +Scenario(..., ({ I }) => // ... + I.click('Create', stepTimeout(10)); + I.see('Created', stepRetry(2)); + //.... +``` + +This change deprecates previous syntax: + +- `I.limitTime().act(...)` => replaced with `I.act(..., stepTimeout())` +- `I.retry().act(...)` => replaced with `I.act(..., stepRetry())` + +Step options should be passed as the very last argument to `I.action()` call. + +Step options can be used to pass additional options to currently existing methods: + +```js +const { stepOpts } = require('codeceptjs/steps') + +I.see('SIGN IN', stepOpts({ ignoreCase: true })) +``` + +Currently this works only on `see` and only with `ignoreCase` param. +However, this syntax will be extended in next versions. + +### Test object can be injected into Scenario + +API for direct access to test object inside Scenario or hooks to add metadata or artifacts: + +```js +BeforeSuite(({ suite }) => { + // no test object here, test is not created yet +}) + +Before(({ test }) => { + // add artifact to test + test.artifacts.myScreenshot = 'screenshot' +}) + +Scenario('test store-test-and-suite test', ({ test }) => { + // add custom meta data + test.meta.browser = 'chrome' +}) + +After(({ test }) => {}) +``` + +Object for `suite` is also injected for all Scenario and hooks. + +### Notable changes + +- Load official Gherkin translations into CodeceptJS. See #4784 by @ebo-zig +- ๐Ÿ‡ณ๐Ÿ‡ฑ `NL` translation introduced by @ebo-zig in #4784: +- [Playwright] Improved experience to highlight and print elements in debug mode +- `codeceptjs run` fails on CI if no tests were executed. This helps to avoid false positive checks. Use `DONT_FAIL_ON_EMPTY_RUN` env variable to disable this behavior +- Various console output improvements +- AI suggested fixes from `heal` plugin (which heals failing tests on the fly) shown in `run-workers` command +- `plugin/standatdActingHelpers` replaced with `Container.STANDARD_ACTING_HELPERS` + +### ๐Ÿ› _Bug Fixes_ + +- Fixed timeouts for `BeforeSuite` and `AfterSuite` +- Fixed stucking process on session switch + +### ๐ŸŽ‡ Internal Refactoring + +This section is listed briefly. 
A new dedicated page for internal API concepts will be added to documentation + +- File structure changed: + - mocha classes moved to `lib/mocha` + - step is split to multiple classes and moved to `lib/step` +- Extended and exposed to public API classes for Test, Suite, Hook + - [Test](https://github.com/codeceptjs/CodeceptJS/blob/3.x/lib/mocha/test.js) + - [Suite](https://github.com/codeceptjs/CodeceptJS/blob/3.x/lib/mocha/suite.js) + - [Hook](https://github.com/codeceptjs/CodeceptJS/blob/3.x/lib/mocha/hooks.js) (Before, After, BeforeSuite, AfterSuite) +- Container: + - refactored to be prepared for async imports in ESM. + - added proxy classes to resolve circular dependencies +- Step + - added different step types [`HelperStep`](https://github.com/codeceptjs/CodeceptJS/blob/3.x/lib/step/helper.js), [`MetaStep`](https://github.com/codeceptjs/CodeceptJS/blob/3.x/lib/step/meta.js), [`FuncStep`](https://github.com/codeceptjs/CodeceptJS/blob/3.x/lib/step/func.js), [`CommentStep`](https://github.com/codeceptjs/CodeceptJS/blob/3.x/lib/step/comment.js) + - added `step.addToRecorder()` to schedule test execution as part of global promise chain +- [Result object](https://github.com/codeceptjs/CodeceptJS/blob/3.x/lib/result.js) added + - `event.all.result` now sends Result object with all failures and stats included +- `run-workers` refactored to use `Result` to send results from workers to main process +- Timeouts refactored `listener/timeout` => [`globalTimeout`](https://github.com/codeceptjs/CodeceptJS/blob/3.x/lib/listener/globalTimeout.js) +- Reduced usages of global variables, more attributes added to [`store`](https://github.com/codeceptjs/CodeceptJS/blob/3.x/lib/store.js) to share data on current state between different parts of system +- `events` API improved + - Hook class is sent as param for `event.hook.passed`, `event.hook.finished` + - `event.test.failed`, `event.test.finished` always sends Test. If test has failed in `Before` or `BeforeSuite` hook, event for all failed test in this suite will be sent + - if a test has failed in a hook, a hook name is sent as 3rd arg to `event.test.failed` + +--- + ## 3.6.10 โค๏ธ Thanks all to those who contributed to make this release! โค๏ธ @@ -2442,7 +2768,7 @@ Read changelog to learn more about version ๐Ÿ‘‡ ```ts const psp = wd.grabPageScrollPosition() // $ExpectType Promise -psp.then((result) => { +psp.then(result => { result.x // $ExpectType number result.y // $ExpectType number }) @@ -3365,7 +3691,7 @@ This change allows using auto-completion when running a specific test. - [WebDriverIO][Protractor][Multiple Sessions](https://codecept.io/acceptance/#multiple-sessions). Run several browser sessions in one test. Introduced `session` command, which opens additional browser window and closes it after a test. 
```js -Scenario('run in different browsers', (I) => { +Scenario('run in different browsers', I => { I.amOnPage('/hello') I.see('Hello!') session('john', () => { @@ -3407,13 +3733,13 @@ locate('//table').find('tr').at(2).find('a').withText('Edit') ```js Feature('checkout').timeout(3000).retry(2) -Scenario('user can order in firefox', (I) => { +Scenario('user can order in firefox', I => { // see dynamic configuration }) .config({ browser: 'firefox' }) .timeout(20000) -Scenario('this test should throw error', (I) => { +Scenario('this test should throw error', I => { // I.amOnPage }).throws(new Error()) ``` @@ -3522,7 +3848,7 @@ I.retry({ retries: 3, maxTimeout: 3000 }).see('Hello') // retry 2 times if error with message 'Node not visible' happens I.retry({ retries: 2, - when: (err) => err.message === 'Node not visible', + when: err => err.message === 'Node not visible', }).seeElement('#user') ``` @@ -3550,7 +3876,7 @@ I.retry({ ```js I.runOnAndroid( - (caps) => caps.platformVersion >= 7, + caps => caps.platformVersion >= 7, () => { // run code only on Android 7+ }, @@ -3959,7 +4285,7 @@ I.say('I expect post is visible on site') ```js Feature('Complex JS Stuff', { retries: 3 }) -Scenario('Not that complex', { retries: 1 }, (I) => { +Scenario('Not that complex', { retries: 1 }, I => { // test goes here }) ``` @@ -3969,7 +4295,7 @@ Scenario('Not that complex', { retries: 1 }, (I) => { ```js Feature('Complex JS Stuff', { timeout: 5000 }) -Scenario('Not that complex', { timeout: 1000 }, (I) => { +Scenario('Not that complex', { timeout: 1000 }, I => { // test goes here }) ``` diff --git a/docs/ai.md b/docs/ai.md index 2a0b5ffb8..83f050b7c 100644 --- a/docs/ai.md +++ b/docs/ai.md @@ -85,7 +85,7 @@ ai: { const openai = new OpenAI({ apiKey: process.env['OPENAI_API_KEY'] }) const completion = await openai.chat.completions.create({ - model: 'gpt-3.5-turbo-0125', + model: 'gpt-3.5-turbo', messages, }) @@ -354,6 +354,131 @@ npx codeceptjs run --ai When execution finishes, you will receive information on token usage and code suggestions proposed by AI. By evaluating this information you will be able to check how effective AI can be for your case. +## Analyze Results + +When running tests with AI enabled, CodeceptJS can automatically analyze test failures and provide insights. The analyze plugin helps identify patterns in test failures and provides detailed explanations of what went wrong. 
+ +Enable the analyze plugin in your config: + +```js +plugins: { + analyze: { + enabled: true, + // analyze up to 3 failures in detail + analyze: 3, + // group similar failures when 5 or more tests fail + clusterize: 5, + // enable screenshot analysis (requires modal that can analyze screenshots) + vision: false + } +} +``` + +When tests are executed with `--ai` flag, the analyze plugin will: + +**Analyze Individual Failures**: For each failed test (up to the `analyze` limit), it will: + +- Examine the error message and stack trace +- Review the test steps that led to the failure +- Provide a detailed explanation of what likely caused the failure +- Suggest possible fixes and improvements + +Sample Analysis report: + +When analyzing individual failures (less than `clusterize` threshold), the output looks like this: + +``` +๐Ÿช„ AI REPORT: +-------------------------------- +โ†’ Cannot submit registration form with invalid email ๐Ÿ‘€ + +* SUMMARY: Form submission failed due to invalid email format, system correctly shows validation message +* ERROR: expected element ".success-message" to be visible, but it is not present in DOM +* CATEGORY: Data errors (password incorrect, no options in select, invalid format, etc) +* STEPS: I.fillField('#email', 'invalid-email'); I.click('Submit'); I.see('.success-message') +* URL: /register + +``` + +> The ๐Ÿ‘€ emoji indicates that screenshot analysis was performed (when `vision: true`). + +**Cluster Similar Failures**: When number of failures exceeds the `clusterize` threshold: + +- Groups failures with similar error patterns +- Identifies common root causes +- Suggests fixes that could resolve multiple failures +- Helps prioritize which issues to tackle first + +**Categorize Failures**: Automatically classifies failures into categories like: + +- Browser/connection issues +- Network errors +- Element locator problems +- Navigation errors +- Code errors +- Data validation issues +- etc. + +Clusterization output: + +``` +๐Ÿช„ AI REPORT: +_______________________________ + +## Group 1 ๐Ÿ” + +* SUMMARY: Element locator failures across login flow +* CATEGORY: HTML / page elements (not found, not visible, etc) +* ERROR: Element "#login-button" is not visible +* STEP: I.click('#login-button') +* SUITE: Authentication +* TAG: @login +* AFFECTED TESTS (4): + x Cannot login with valid credentials + x Should show error on invalid login + x Login button should be disabled when form empty + x Should redirect to dashboard after login + +## Group 2 ๐ŸŒ + +* SUMMARY: API timeout issues during user data fetch +* CATEGORY: Network errors (server error, timeout, etc) +* URL: /api/v1/users +* ERROR: Request failed with status code 504, Gateway Timeout +* SUITE: User Management +* AFFECTED TESTS (3): + x Should load user profile data + x Should display user settings + x Should fetch user notifications + +## Group 3 โš ๏ธ + +* SUMMARY: Form validation errors on registration page +* CATEGORY: Data errors (password incorrect, no options in select, invalid format, etc) +* ERROR: Expected field "password" to have error "Must be at least 8 characters" +* STEP: I.see('Must be at least 8 characters', '.error-message') +* SUITE: User Registration +* TAG: @registration +* AFFECTED TESTS (2): + x Should validate password requirements + x Should show all validation errors on submit +``` + +If `vision: true` is enabled and your tests take screenshots on failure, the plugin will also analyze screenshots to provide additional visual context about the failure. 
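+
+For screenshot analysis to work, failed tests need to produce screenshots in the first place. A minimal sketch of such a setup (assuming the standard `screenshotOnFail` plugin and an AI model that can process images) could look like this:
+
+```js
+// codecept.conf.js (sketch, not a drop-in config)
+exports.config = {
+  plugins: {
+    // saves a screenshot for every failed test
+    screenshotOnFail: { enabled: true },
+    // lets analyze inspect those screenshots; requires a vision-capable model
+    analyze: { enabled: true, vision: true },
+  },
+}
+```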
+
+The analysis helps teams:
+
+- Quickly understand the root cause of failures
+- Identify patterns in failing tests
+- Prioritize fixes based on impact
+- Maintain more stable test suites
+
+Run tests with both AI and analyze enabled:
+
+```bash
+npx codeceptjs run --ai
+```
+
 ## Arbitrary Prompts

 What if you want to take AI on the journey of test automation and ask it questions while browsing pages?
diff --git a/docs/effects.md b/docs/effects.md
new file mode 100644
index 000000000..bf6d39a2d
--- /dev/null
+++ b/docs/effects.md
@@ -0,0 +1,101 @@
+# Effects
+
+Effects are functions that can modify scenario flow. They provide ways to handle conditional steps, retries, and test flow control.
+
+## Installation
+
+Effects can be imported directly from CodeceptJS:
+
+```js
+const { tryTo, retryTo, within } = require('codeceptjs/effects')
+```
+
+> 📝 Note: Prior to v3.7, `tryTo` and `retryTo` were available globally via plugins. This behavior is deprecated and will be removed in v4.0.
+
+## tryTo
+
+The `tryTo` effect allows you to attempt steps that may fail without stopping test execution. It's useful for handling optional steps or conditions that aren't critical for the test flow.
+
+```js
+const { tryTo } = require('codeceptjs/effects')
+
+// inside a test
+const success = await tryTo(() => {
+  // These steps may fail but won't stop the test
+  I.see('Cookie banner')
+  I.click('Accept cookies')
+})
+
+if (!success) {
+  I.say('Cookie banner was not found')
+}
+```
+
+If the steps inside `tryTo` fail:
+
+- The test will continue execution
+- The failure will be logged in debug output
+- `tryTo` returns `false`
+- Auto-retries are disabled inside `tryTo` blocks
+
+## retryTo
+
+The `retryTo` effect allows you to retry a set of steps multiple times until they succeed. This is useful for handling flaky elements or conditions that may need multiple attempts.
+
+```js
+const { retryTo } = require('codeceptjs/effects')
+
+// Retry up to 5 times with 200ms between attempts
+await retryTo(() => {
+  I.switchTo('#editor-frame')
+  I.fillField('textarea', 'Hello world')
+}, 5)
+```
+
+Parameters:
+
+- `callback` - Function containing steps to retry
+- `maxTries` - Maximum number of retry attempts
+- `pollInterval` - (optional) Delay between retries in milliseconds (default: 200ms)
+
+The callback receives the current retry count as an argument:
+
+```js
+const { retryTo } = require('codeceptjs/effects')
+
+// inside a test...
+await retryTo(tries => {
+  I.say(`Attempt ${tries}`)
+  I.click('Submit')
+  I.see('Success')
+}, 3)
+```
+
+## within
+
+The `within` effect allows you to perform multiple steps within a specific context (like an iframe or modal):
+
+```js
+const { within } = require('codeceptjs/effects')
+
+// inside a test...
+
+within('.modal', () => {
+  I.see('Modal title')
+  I.click('Close')
+})
+```
+
+## Usage with TypeScript
+
+Effects are fully typed and work well with TypeScript:
+
+```ts
+import { tryTo, retryTo, within } from 'codeceptjs/effects'
+
+const success = await tryTo(async () => {
+  await I.see('Element')
+})
+```
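+
+## Migrating from plugins
+
+If `tryTo` or `retryTo` are currently enabled as plugins in your config, a minimal migration sketch (assuming a standard `codecept.conf.js`) is to drop the plugin entries and import the functions where they are used:
+
+```js
+// codecept.conf.js (sketch): the plugin entries are no longer needed
+exports.config = {
+  plugins: {
+    // tryTo: { enabled: true },   // remove, deprecated in 3.7
+    // retryTo: { enabled: true }, // remove, deprecated in 3.7
+  },
+}
+
+// in a test file, import the effects instead of relying on globals
+const { tryTo, retryTo } = require('codeceptjs/effects')
+```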
diff --git a/docs/plugins.md b/docs/plugins.md index ec4bbc604..3d3a82313 100644 --- a/docs/plugins.md +++ b/docs/plugins.md @@ -5,13 +5,50 @@ sidebar: auto title: Plugins --- -## analyze### Parameters* `config` **any** (optional, default `{}`)## authLogs user in for the first test and reuses session for next tests. +## analyzeCodeceptJS Analyze Plugin - Uses AI to analyze test failures and provide insightsThis plugin analyzes failed tests using AI to provide detailed explanations and group similar failures. -Works by saving cookies into memory or file. -If a session expires automatically logs in again.> For better development experience cookies can be saved into file, so a session can be reused while writing tests.#### Usage1. Enable this plugin and configure as described below 2. Define user session names (example: `user`, `editor`, `admin`, etc). 3. Define how users are logged in and how to check that user is logged in 4. Use `login` object inside your tests to log in:```js -// inside a test file -// use login to inject auto-login function -Feature('Login'); +When enabled with --ai flag, it generates reports after test execution.#### Usage`js +// in codecept.conf.js +exports.config = { + plugins: { + analyze: { + enabled: true, + clusterize: 5, + analyze: 2, + vision: false + } + } +} +`#### Configuration\* `clusterize` (number) - minimum number of failures to trigger clustering analysis. Default: 5 + +- `analyze` (number) - maximum number of individual test failures to analyze in detail. Default: 2 +- `vision` (boolean) - enables visual analysis of test screenshots. Default: false +- `categories` (array) - list of failure categories for classification. Defaults to: + - Browser connection error / browser crash + - Network errors (server error, timeout, etc) + - HTML / page elements (not found, not visible, etc) + - Navigation errors (404, etc) + - Code errors (syntax error, JS errors, etc) + - Library & framework errors + - Data errors (password incorrect, invalid format, etc) + - Assertion failures + - Other errors +- `prompts` (object) - customize AI prompts for analysis + - `clusterize` - prompt for clustering analysis + - `analyze` - prompt for individual test analysis#### Features\* Groups similar failures when number of failures >= clusterize value +- Provides detailed analysis of individual failures +- Analyzes screenshots if vision=true and screenshots are available +- Classifies failures into predefined categories +- Suggests possible causes and solutions### Parameters\* `config` **[Object][1]** Plugin configuration (optional, default `{}`)Returns **void** ## authLogs user in for the first test and reuses session for next tests. + Works by saving cookies into memory or file. + If a session expires automatically logs in again.> For better development experience cookies can be saved into file, so a session can be reused while writing tests.#### Usage1. Enable this plugin and configure as described below + +2. Define user session names (example: `user`, `editor`, `admin`, etc). +3. Define how users are logged in and how to check that user is logged in +4. Use `login` object inside your tests to log in:```js + // inside a test file + // use login to inject auto-login function + Feature('Login'); Before(({ login }) => { login('user'); // login using user session @@ -298,11 +335,11 @@ plugins: { outputDir: 'output/coverage' } } -```Possible config options, More could be found at [monocart-coverage-reports][1]* `debug`: debug info. By default, false. 
+```Possible config options, More could be found at [monocart-coverage-reports][2]* `debug`: debug info. By default, false. * `name`: coverage report name. * `outputDir`: path to coverage report. * `sourceFilter`: filter the source files. -* `sourcePath`: option to resolve a custom path.### Parameters* `config` ## customLocatorCreates a [custom locator][2] by using special attributes in HTML.If you have a convention to use `data-test-id` or `data-qa` attributes to mark active elements for e2e tests, +* `sourcePath`: option to resolve a custom path.### Parameters* `config` ## customLocatorCreates a [custom locator][3] by using special attributes in HTML.If you have a convention to use `data-test-id` or `data-qa` attributes to mark active elements for e2e tests, you can enable this plugin to simplify matching elements with these attributes:```js // replace this: I.click({ css: '[data-test-id=register_button]'); @@ -380,13 +417,13 @@ await eachElement('check all items are visible', '.item', async (el) => { assert(await el.isVisible()); }); ```This method works with WebDriver, Playwright, Puppeteer, Appium helpers.Function parameter `el` represents a matched element. -Depending on a helper API of `el` can be different. Refer to API of corresponding browser testing engine for a complete API list:* [Playwright ElementHandle][3] -* [Puppeteer][4] -* [webdriverio element][5]#### Configuration* `registerGlobal` - to register `eachElement` function globally, true by defaultIf `registerGlobal` is false you can use eachElement from the plugin:```js +Depending on a helper API of `el` can be different. Refer to API of corresponding browser testing engine for a complete API list:* [Playwright ElementHandle][4] +* [Puppeteer][5] +* [webdriverio element][6]#### Configuration* `registerGlobal` - to register `eachElement` function globally, true by defaultIf `registerGlobal` is false you can use eachElement from the plugin:```js const eachElement = codeceptjs.container.plugins('eachElement'); -```### Parameters* `purpose` **[string][6]** +```### Parameters* `purpose` **[string][7]** * `locator` **CodeceptJS.LocatorOrString** -* `fn` **[Function][7]** Returns **([Promise][8]\ | [undefined][9])** ## fakerTransformUse the `@faker-js/faker` package to generate fake data inside examples on your gherkin tests#### UsageTo start please install `@faker-js/faker` package npm install -D @faker-js/faker yarn add -D @faker-js/fakerAdd this plugin to config file:```js +* `fn` **[Function][8]** Returns **([Promise][9]\ | [undefined][10])** ## fakerTransformUse the `@faker-js/faker` package to generate fake data inside examples on your gherkin tests#### UsageTo start please install `@faker-js/faker` package npm install -D @faker-js/faker yarn add -D @faker-js/fakerAdd this plugin to config file:```js plugins: { fakerTransform: { enabled: true @@ -400,7 +437,7 @@ Scenario Outline: ... Examples: | productName | customer | email | anythingMore | | {{commerce.product}} | Dr. 
{{name.findName}} | {{internet.email}} | staticData | -```### Parameters* `config` ## healSelf-healing tests with AI.Read more about heaking in [Self-Healing Tests][10]```js +```### Parameters* `config` ## healSelf-healing tests with AI.Read more about heaking in [Self-Healing Tests][11]```js plugins: { heal: { enabled: true, @@ -414,7 +451,7 @@ plugins: { enabled: true, } ```Additional config options:* `errorClasses` - list of classes to search for errors (default: `['error', 'warning', 'alert', 'danger']`) -* `browserLogs` - list of types of errors to search for in browser logs (default: `['error']`)### Parameters* `config` (optional, default `{}`)## pauseOnFailAutomatically launches [interactive pause][11] when a test fails.Useful for debugging flaky tests on local environment. +* `browserLogs` - list of types of errors to search for in browser logs (default: `['error']`)### Parameters* `config` (optional, default `{}`)## pauseOnFailAutomatically launches [interactive pause][12] when a test fails.Useful for debugging flaky tests on local environment. Add this plugin to config file:```js plugins: { pauseOnFail: {}, @@ -462,8 +499,8 @@ plugins: { } } ```Possible config options:* `uniqueScreenshotNames`: use unique names for screenshot. Default: false. -* `fullPageScreenshots`: make full page screenshots. Default: false.### Parameters* `config` ## selenoid[Selenoid][12] plugin automatically starts browsers and video recording. -Works with WebDriver helper.### PrerequisiteThis plugin **requires Docker** to be installed.> If you have issues starting Selenoid with this plugin consider using the official [Configuration Manager][13] tool from Selenoid### UsageSelenoid plugin can be started in two ways:1. **Automatic** - this plugin will create and manage selenoid container for you. +* `fullPageScreenshots`: make full page screenshots. Default: false.### Parameters* `config` ## selenoid[Selenoid][13] plugin automatically starts browsers and video recording. +Works with WebDriver helper.### PrerequisiteThis plugin **requires Docker** to be installed.> If you have issues starting Selenoid with this plugin consider using the official [Configuration Manager][14] tool from Selenoid### UsageSelenoid plugin can be started in two ways:1. **Automatic** - this plugin will create and manage selenoid container for you. 2. **Manual** - you create the conatainer and configure it with a plugin (recommended).#### AutomaticIf you are new to Selenoid and you want plug and play setup use automatic mode.Add plugin configuration in `codecept.conf.js`:```js plugins: { selenoid: { @@ -476,10 +513,10 @@ plugins: { enableLog: true, }, } -```When `autoCreate` is enabled it will pull the [latest Selenoid from DockerHub][14] and start Selenoid automatically. +```When `autoCreate` is enabled it will pull the [latest Selenoid from DockerHub][15] and start Selenoid automatically. It will also create `browsers.json` file required by Selenoid.In automatic mode the latest version of browser will be used for tests. It is recommended to specify exact version of each browser inside `browsers.json` file.> **If you are using Windows machine or if `autoCreate` does not work properly, create container manually**#### ManualWhile this plugin can create containers for you for better control it is recommended to create and launch containers manually. 
-This is especially useful for Continous Integration server as you can configure scaling for Selenoid containers.> Use [Selenoid Configuration Manager][13] to create and start containers semi-automatically.1. Create `browsers.json` file in the same directory `codecept.conf.js` is located - [Refer to Selenoid documentation][15] to know more about browsers.json.*Sample browsers.json*```js +This is especially useful for Continous Integration server as you can configure scaling for Selenoid containers.> Use [Selenoid Configuration Manager][14] to create and start containers semi-automatically.1. Create `browsers.json` file in the same directory `codecept.conf.js` is located + [Refer to Selenoid documentation][16] to know more about browsers.json.*Sample browsers.json*```js { "chrome": { "default": "latest", @@ -492,7 +529,7 @@ This is especially useful for Continous Integration server as you can configure } } } -```> It is recommended to use specific versions of browsers in `browsers.json` instead of latest. This will prevent tests fail when browsers will be updated.**โš  At first launch selenoid plugin takes extra time to download all Docker images before tests starts**.2. Create Selenoid containerRun the following command to create a container. To know more [refer here][16]```bash +```> It is recommended to use specific versions of browsers in `browsers.json` instead of latest. This will prevent tests fail when browsers will be updated.**โš  At first launch selenoid plugin takes extra time to download all Docker images before tests starts**.2. Create Selenoid containerRun the following command to create a container. To know more [refer here][17]```bash docker create \ --name selenoid \ -p 4444:4444 \ @@ -511,7 +548,7 @@ To save space videos for all succesful tests are deleted. This can be changed by | enableVideo | Enable video recording and use `video` folder of output (default: false) | | enableLog | Enable log recording and use `logs` folder of output (default: false) | | deletePassed | Delete video and logs of passed tests (default : true) | -| additionalParams | example: `additionalParams: '--env TEST=test'` [Refer here][17] to know more |### Parameters* `config` ## stepByStepReport![step-by-step-report][18]Generates step by step report for a test. +| additionalParams | example: `additionalParams: '--env TEST=test'` [Refer here][18] to know more |### Parameters* `config` ## stepByStepReport![step-by-step-report][19]Generates step by step report for a test. After each step in a test a screenshot is created. After test executed screenshots are combined into slideshow. By default, reports are generated only for failed tests.Run tests with plugin enabled: npx codeceptjs run --plugins stepByStepReport#### Configuration```js "plugins": { @@ -568,9 +605,9 @@ plugins: { * sauce * testingbot * browserstack -* appiumA complete list of all available services can be found on [webdriverio website][19].#### Setup1. Install a webdriverio service +* appiumA complete list of all available services can be found on [webdriverio website][20].#### Setup1. Install a webdriverio service 2. Enable `wdio` plugin in config -3. Add service name to `services` array inside wdio plugin config.See examples below:#### Selenium Standalone ServiceInstall ` @wdio/selenium-standalone-service` package, as [described here][20]. +3. Add service name to `services` array inside wdio plugin config.See examples below:#### Selenium Standalone ServiceInstall ` @wdio/selenium-standalone-service` package, as [described here][21]. 
It is important to make sure it is compatible with current webdriverio version.Enable `wdio` plugin in plugins list and add `selenium-standalone` service:```js plugins: { wdio: { @@ -579,7 +616,7 @@ plugins: { // additional config for service can be passed here } } -```#### Sauce ServiceInstall `@wdio/sauce-service` package, as [described here][21]. +```#### Sauce ServiceInstall `@wdio/sauce-service` package, as [described here][22]. It is important to make sure it is compatible with current webdriverio version.Enable `wdio` plugin in plugins list and add `sauce` service:```js plugins: { wdio: { @@ -591,5 +628,5 @@ plugins: { } } ```***In the same manner additional services from webdriverio can be installed, enabled, and configured.#### Configuration* `services` - list of enabled services -* ... - additional configuration passed into services.### Parameters* `config` [1]: https://github.com/cenfun/monocart-coverage-reports?tab=readme-ov-file#default-options[2]: https://codecept.io/locators#custom-locators[3]: https://playwright.dev/docs/api/class-elementhandle[4]: https://pptr.dev/#?product=Puppeteer&show=api-class-elementhandle[5]: https://webdriver.io/docs/api[6]: https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/String[7]: https://developer.mozilla.org/docs/Web/JavaScript/Reference/Statements/function[8]: https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/Promise[9]: https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/undefined[10]: https://codecept.io/heal/[11]: /basics/#pause[12]: https://aerokube.com/selenoid/[13]: https://aerokube.com/cm/latest/[14]: https://hub.docker.com/u/selenoid[15]: https://aerokube.com/selenoid/latest/#_prepare_configuration[16]: https://aerokube.com/selenoid/latest/#_option_2_start_selenoid_container[17]: https://docs.docker.com/engine/reference/commandline/create/[18]: https://codecept.io/img/codeceptjs-slideshow.gif[19]: https://webdriver.io[20]: https://webdriver.io/docs/selenium-standalone-service.html[21]: https://webdriver.io/docs/sauce-service.html +* ... 
- additional configuration passed into services.### Parameters* `config` [1]: https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/Object[2]: https://github.com/cenfun/monocart-coverage-reports?tab=readme-ov-file#default-options[3]: https://codecept.io/locators#custom-locators[4]: https://playwright.dev/docs/api/class-elementhandle[5]: https://pptr.dev/#?product=Puppeteer&show=api-class-elementhandle[6]: https://webdriver.io/docs/api[7]: https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/String[8]: https://developer.mozilla.org/docs/Web/JavaScript/Reference/Statements/function[9]: https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/Promise[10]: https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/undefined[11]: https://codecept.io/heal/[12]: /basics/#pause[13]: https://aerokube.com/selenoid/[14]: https://aerokube.com/cm/latest/[15]: https://hub.docker.com/u/selenoid[16]: https://aerokube.com/selenoid/latest/#_prepare_configuration[17]: https://aerokube.com/selenoid/latest/#_option_2_start_selenoid_container[18]: https://docs.docker.com/engine/reference/commandline/create/[19]: https://codecept.io/img/codeceptjs-slideshow.gif[20]: https://webdriver.io[21]: https://webdriver.io/docs/selenium-standalone-service.html[22]: https://webdriver.io/docs/sauce-service.html ``````` diff --git a/lib/event.js b/lib/event.js index 4dd500d8f..1791f742f 100644 --- a/lib/event.js +++ b/lib/event.js @@ -54,6 +54,8 @@ module.exports = { * @inner * @property {'hook.start'} started * @property {'hook.passed'} passed + * @property {'hook.failed'} failed + * @property {'hook.finished'} finished */ hook: { started: 'hook.start', diff --git a/lib/plugin/analyze.js b/lib/plugin/analyze.js index 77897bafc..2f7526ea4 100644 --- a/lib/plugin/analyze.js +++ b/lib/plugin/analyze.js @@ -60,7 +60,7 @@ const defaultConfig = { If there is no groups of tests, say: "No patterns found" Preserve error messages but cut them if they are too long. - Respond clearly and directly, without introductory words or phrases like โ€˜Of course,โ€™ โ€˜Here is the answer,โ€™ etc. + Respond clearly and directly, without introductory words or phrases like 'Of course,' 'Here is the answer,' etc. Do not list more than 3 errors in the group. If you identify that all tests in the group have the same tag, add this tag to the group report, otherwise ignore TAG section. If you identify that all tests in the group have the same suite, add this suite to the group report, otherwise ignore SUITE section. @@ -160,9 +160,56 @@ const defaultConfig = { } /** + * CodeceptJS Analyze Plugin - Uses AI to analyze test failures and provide insights * - * @param {*} config - * @returns + * This plugin analyzes failed tests using AI to provide detailed explanations and group similar failures. + * When enabled with --ai flag, it generates reports after test execution. + * + * #### Usage + * + * ```js + * // in codecept.conf.js + * exports.config = { + * plugins: { + * analyze: { + * enabled: true, + * clusterize: 5, + * analyze: 2, + * vision: false + * } + * } + * } + * ``` + * + * #### Configuration + * + * * `clusterize` (number) - minimum number of failures to trigger clustering analysis. Default: 5 + * * `analyze` (number) - maximum number of individual test failures to analyze in detail. Default: 2 + * * `vision` (boolean) - enables visual analysis of test screenshots. Default: false + * * `categories` (array) - list of failure categories for classification. 
Defaults to: + * - Browser connection error / browser crash + * - Network errors (server error, timeout, etc) + * - HTML / page elements (not found, not visible, etc) + * - Navigation errors (404, etc) + * - Code errors (syntax error, JS errors, etc) + * - Library & framework errors + * - Data errors (password incorrect, invalid format, etc) + * - Assertion failures + * - Other errors + * * `prompts` (object) - customize AI prompts for analysis + * - `clusterize` - prompt for clustering analysis + * - `analyze` - prompt for individual test analysis + * + * #### Features + * + * * Groups similar failures when number of failures >= clusterize value + * * Provides detailed analysis of individual failures + * * Analyzes screenshots if vision=true and screenshots are available + * * Classifies failures into predefined categories + * * Suggests possible causes and solutions + * + * @param {Object} config - Plugin configuration + * @returns {void} */ module.exports = function (config = {}) { config = Object.assign(defaultConfig, config) diff --git a/package.json b/package.json index 49cefa192..0b5e6ce80 100644 --- a/package.json +++ b/package.json @@ -1,6 +1,6 @@ { "name": "codeceptjs", - "version": "3.7.0-beta.19", + "version": "3.7.0", "description": "Supercharged End 2 End Testing Framework for NodeJS", "keywords": [ "acceptance",