Mirror of https://github.com/microsoft/playwright.git, synced 2025-06-26 21:40:17 +00:00

docs: update test advanced guides (#10861)

Linking to the API reference, using better examples and newer docs.

This commit is contained in: parent 59ea11cf16, commit 308c7b4e32
@@ -7,132 +7,65 @@ title: "Advanced: configuration"

## Configuration object

Configuration file exports a single object.
Configuration file exports a single [TestConfig] object. See [TestConfig] properties for available configuration options.

### Test suite options

Note that each [test project](#projects) can provide its own [options][TestProject], for example two projects can run different tests by providing different `testDir`s.

These options define your test suite:
- `metadata: any` - Any JSON-serializable metadata that will be put directly to the test report.
- `name: string` - Project name, useful when defining multiple [test projects](#projects).
- `outputDir: string` - Output directory for files created during the test run.
- `snapshotDir: string` - Base output directory for snapshot files.
- `repeatEach: number` - The number of times to repeat each test, useful for debugging flaky tests.
- `retries: number` - The maximum number of retry attempts given to failed tests. If not specified, failing tests are not retried.
- `testDir: string` - Directory that will be recursively scanned for test files.
- `testIgnore: string | RegExp | (string | RegExp)[]` - Files matching one of these patterns are not considered test files.
- `testMatch: string | RegExp | (string | RegExp)[]` - Only the files matching one of these patterns are considered test files.
- `timeout: number` - Timeout for each test in milliseconds.
- `use` - An object defining fixture options.

### Test run options

These options would typically differ between local development and CI operation:
- `forbidOnly: boolean` - Whether to exit with an error if any tests are marked as `test.only`. Useful on CI.
- `globalSetup: string` - Path to the [global setup](#global-setup-and-teardown) file. This file will be required and run before all the tests. It must export a single function.
- `globalTeardown: string` - Path to the [global teardown](#global-setup-and-teardown) file. This file will be required and run after all the tests. It must export a single function.
- `globalTimeout: number` - Total timeout in milliseconds for the whole test run.
- `grep: RegExp | RegExp[]` - Patterns to filter tests based on their title.
- `maxFailures: number` - The maximum number of test failures for this test run. After reaching this number, testing will stop and exit with an error. Setting to zero (default) disables this behavior.
- `preserveOutput: 'always' | 'never' | 'failures-only'` - Whether to preserve test output in the `outputDir`:
  - `'always'` - preserve output for all tests;
  - `'never'` - do not preserve output for any tests;
  - `'failures-only'` - only preserve output for failed tests.
- `projects: Project[]` - Multiple [projects](#projects) configuration.
- `quiet: boolean` - Whether to suppress stdout and stderr from the tests.
- `reporter: 'list' | 'line' | 'dot' | 'json' | 'junit' | 'github' | 'html' | 'null'` - The reporter to use. See [reporters](./test-reporters.md) for details.
- `reportSlowTests: { max: number, threshold: number } | null` - Whether to report slow test files. When `null`, slow test files are not reported. Otherwise, test files that took more than `threshold` milliseconds are reported as slow, but no more than `max` number of them. Passing zero as `max` reports all test files that exceed the threshold.
- `shard: { total: number, current: number } | null` - [Shard](./test-parallel.md#shard-tests-between-multiple-machines) information.
- `updateSnapshots: boolean` - Whether to update expected snapshots with the actual results produced by the test run.
- `workers: number` - The maximum number of concurrent worker processes to use for parallelizing tests.
- `webServer: { command: string, port: number, timeout?: number, reuseExistingServer?: boolean, cwd?: string, env?: object }` - Launch a process and wait for it to be ready before the tests start. See [launch web server](#launching-a-development-web-server-during-the-tests) configuration for examples.

Note that each [test project](#projects) can provide its own test suite options, for example two projects can run different tests by providing different `testDir`s. However, test run options are shared between all projects.
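
As a rough sketch of how these options fit together (the test directory, server command and port below are hypothetical), a configuration file combining a few suite and run options might look like this:

```js js-flavor=ts
// playwright.config.ts
import { PlaywrightTestConfig } from '@playwright/test';

const config: PlaywrightTestConfig = {
  // Test suite options.
  testDir: './tests',               // hypothetical test directory
  timeout: 30000,                   // 30 seconds per test
  retries: process.env.CI ? 2 : 0,  // retry on CI only

  // Test run options.
  forbidOnly: !!process.env.CI,     // fail the run if test.only slipped into CI
  workers: 4,
  reporter: 'list',
  webServer: {
    command: 'npm run start',       // hypothetical dev server command
    port: 3000,                     // hypothetical port
    reuseExistingServer: true,
  },
};
export default config;
```
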
## workerInfo object

Depending on the configuration and failures, Playwright Test might use a different number of worker processes to run all the tests. For example, Playwright Test will always start a new worker process after a failing test.

Worker-scoped fixtures and `beforeAll` and `afterAll` hooks receive a `workerInfo` parameter. The following information is accessible from the `workerInfo`:
- `config` - [Configuration object](#configuration-object).
- `project` - Specific [project](#projects) configuration for this worker. Different projects are always run in separate processes.
- `workerIndex: number` - A unique sequential index assigned to the worker process.

Consider an example where we run a new http server per worker process, and use `workerIndex` to produce a unique port number:

```js js-flavor=js
// my-test.js
const base = require('@playwright/test');
const http = require('http');

// Note how we mark the fixture as { scope: 'worker' }.
// Also note that we pass empty {} first, since we do not declare any test fixtures.
exports.test = base.test.extend({
  server: [ async ({}, use, workerInfo) => {
    // Start the server.
    const server = http.createServer();
    server.listen(9000 + workerInfo.workerIndex);
    await new Promise(ready => server.once('listening', ready));

    // Use the server in the tests.
    await use(server);

    // Cleanup.
    await new Promise(done => server.close(done));
  }, { scope: 'worker' } ]
});
```
Here is an example that defines a common timeout and two projects. The "Smoke" project runs a small subset of tests without retries, and the "Default" project runs all other tests with retries.

```js js-flavor=ts
// my-test.ts
import { test as base } from '@playwright/test';
import * as http from 'http';

// Note how we mark the fixture as { scope: 'worker' }.
// Also note that we pass empty {} first, since we do not declare any test fixtures.
export const test = base.extend<{}, { server: http.Server }>({
  server: [ async ({}, use, workerInfo) => {
    // Start the server.
    const server = http.createServer();
    server.listen(9000 + workerInfo.workerIndex);
    await new Promise(ready => server.once('listening', ready));

    // Use the server in the tests.
    await use(server);

    // Cleanup.
    await new Promise(done => server.close(done));
  }, { scope: 'worker' } ]
});
```

```js js-flavor=ts
// playwright.config.ts
import { PlaywrightTestConfig } from '@playwright/test';

const config: PlaywrightTestConfig = {
  timeout: 60000, // Timeout is shared between all tests.
  projects: [
    {
      name: 'Smoke',
      testMatch: /.*smoke.spec.ts/,
      retries: 0,
    },
    {
      name: 'Default',
      testIgnore: /.*smoke.spec.ts/,
      retries: 2,
    },
  ],
};
export default config;
```
```js js-flavor=js
// playwright.config.js
// @ts-check

/** @type {import('@playwright/test').PlaywrightTestConfig} */
const config = {
  timeout: 60000, // Timeout is shared between all tests.
  projects: [
    {
      name: 'Smoke',
      testMatch: /.*smoke.spec.ts/,
      retries: 0,
    },
    {
      name: 'Default',
      testIgnore: /.*smoke.spec.ts/,
      retries: 2,
    },
  ],
};
module.exports = config;
```

## testInfo object

Test fixtures and `beforeEach` and `afterEach` hooks receive a `testInfo` parameter. It is also available to the test function as a second parameter.
## TestInfo object

In addition to everything from the [`workerInfo`](#workerinfo), the following information is accessible before and during the test:
- `title: string` - Test title.
- `file: string` - Full path to the test file.
- `line: number` - Line number of the test declaration.
- `column: number` - Column number of the test declaration.
- `fn: Function` - Test body function.
- `repeatEachIndex: number` - The sequential repeat index.
- `retry: number` - The sequential number of the test retry (zero means first run).
- `expectedStatus: 'passed' | 'failed' | 'timedOut'` - Whether this test is expected to pass, fail or timeout.
- `timeout: number` - Test timeout.
- `annotations` - [Annotations](./test-annotations.md) that were added to the test.
- `snapshotSuffix: string` - Suffix used to locate snapshots for the test.
- `snapshotPath(...pathSegments: string[])` - Function that returns the full path to a particular snapshot for the test.
- `outputDir: string` - Path to the output directory for this test run.
- `outputPath(...pathSegments: string[])` - Function that returns the full path to a particular output artifact for the test.

Test functions, fixtures and hooks receive a [TestInfo] parameter that provides information about the currently running test as well as some useful utilities that include:
- Information about the test, for example `title`, `config` and `project`.
- Information about test execution, for example `expectedStatus` and `status`.
- Test artifact utilities, for example `outputPath()` and `attach()`.

The following information is accessible after the test body has finished, in fixture teardown:
- `duration: number` - test running time in milliseconds.
- `status: 'passed' | 'failed' | 'timedOut'` - the actual test result.
- `error` - any error thrown by the test body.
- `stdout: (string | Buffer)[]` - array of stdout chunks collected during the test run.
- `stderr: (string | Buffer)[]` - array of stderr chunks collected during the test run.

See [TestInfo] methods and properties for all available information and utilities.

Here is an example test that saves some information:
Here is an example test that saves information to a file using [TestInfo].

```js js-flavor=js
// example.spec.js
const { test } = require('@playwright/test');
@@ -157,7 +90,7 @@ test('my test needs a file', async ({ table }, testInfo) => {
});
```

Here is an example fixture that automatically saves debug logs when the test fails:
Here is an example fixture that automatically saves debug logs when the test fails.

```js js-flavor=js
// my-test.js
const debug = require('debug');
```
@@ -365,103 +298,32 @@ test('test', async ({ page }) => {

## Projects

Playwright Test supports running multiple test projects at the same time. This is useful for running the same tests in multiple configurations. For example, consider running tests against multiple versions of some REST backend.
Playwright Test supports running multiple test projects at the same time. This is useful for running the same or different tests in multiple configurations.

In the following example, we will declare an option for the backend version and a fixture that uses the option, and configure two projects that test against different versions.
### Same tests, different configuration

```js js-flavor=js
// my-test.js
const base = require('@playwright/test');
const { startBackend } = require('./my-backend');

exports.test = base.test.extend({
  // Define an option and provide a default value.
  // We can later override it in the config.
  version: ['1.0', { option: true }],

  // Use version when starting the backend.
  backendURL: async ({ version }, use) => {
    const app = await startBackend(version);
    await use(app.baseUrl());
    await app.close();
  },
});
```
```js js-flavor=ts
// my-test.ts
import { test as base } from '@playwright/test';
import { startBackend } from './my-backend';

export type TestOptions = { version: string };
type TestFixtures = { backendURL: string };

export const test = base.extend<TestOptions & TestFixtures>({
  // Define an option and provide a default value.
  // We can later override it in the config.
  version: ['1.0', { option: true }],

  // Use version when starting the backend.
  backendURL: async ({ version }, use) => {
    const app = await startBackend(version);
    await use(app.baseUrl());
    await app.close();
  },
});
```

We can use our fixture and/or option in the test.

```js js-flavor=js
// example.spec.js
const { test } = require('./my-test');

test('test 1', async ({ page, backendURL }) => {
  await page.goto(`${backendURL}/index.html`);
  // ...
});

test('test 2', async ({ version, page, backendURL }) => {
  test.fixme(version === '2.0', 'This feature is not implemented in 2.0 yet');

  await page.goto(`${backendURL}/index.html`);
  // ...
});
```
```js js-flavor=ts
// example.spec.ts
import { test } from './my-test';

test('test 1', async ({ page, backendURL }) => {
  await page.goto(`${backendURL}/index.html`);
  // ...
});

test('test 2', async ({ version, page, backendURL }) => {
  test.fixme(version === '2.0', 'This feature is not implemented in 2.0 yet');

  await page.goto(`${backendURL}/index.html`);
  // ...
});
```

Now, we can run tests in multiple configurations by using projects.
Here is an example that runs the same tests in different browsers:

```js js-flavor=js
// playwright.config.js
// @ts-check
const { devices } = require('@playwright/test');

/** @type {import('@playwright/test').PlaywrightTestConfig<{ version: string }>} */
/** @type {import('@playwright/test').PlaywrightTestConfig} */
const config = {
  projects: [
    {
      name: 'v1',
      use: { version: '1.0' },
      name: 'chromium',
      use: { ...devices['Desktop Chrome'] },
    },
    {
      name: 'v2',
      use: { version: '2.0' },
      name: 'firefox',
      use: { ...devices['Desktop Firefox'] },
    },
  ]
    {
      name: 'webkit',
      use: { ...devices['Desktop Safari'] },
    },
  ],
};

module.exports = config;
```

@@ -469,40 +331,151 @@ module.exports = config;
```js js-flavor=ts
// playwright.config.ts
import { PlaywrightTestConfig } from '@playwright/test';
import { TestOptions } from './my-test';
import { PlaywrightTestConfig, devices } from '@playwright/test';

const config: PlaywrightTestConfig<TestOptions> = {
const config: PlaywrightTestConfig = {
  projects: [
    {
      name: 'v1',
      use: { version: '1.0' },
      name: 'chromium',
      use: { ...devices['Desktop Chrome'] },
    },
    {
      name: 'v2',
      use: { version: '2.0' },
      name: 'firefox',
      use: { ...devices['Desktop Firefox'] },
    },
  ]
    {
      name: 'webkit',
      use: { ...devices['Desktop Safari'] },
    },
  ],
};
export default config;
```

Each project can be configured separately, and can run a different set of tests with different [built-in][TestProject] and custom options.

You can run all projects or just a single one:
```bash
# Run both projects - each test will be run twice
# Run all projects - each test will be run three times
npx playwright test

# Run a single project - each test will be run once
npx playwright test --project=v2
npx playwright test --project=chromium
```

There are many more things you can do with projects:
- Run a subset of tests by specifying a different `testDir` for each project (see the sketch after this list).
- Run tests in multiple configurations, for example with desktop Chromium and emulating Chrome for Android.
- Run "core" tests without retries to ensure stability of the core functionality, and use `retries` for other tests.
- And much more! See [project options][TestProject] for the list of options available to each project.
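
For instance, here is a minimal sketch (the directory names are hypothetical) of two projects that pick up tests from different `testDir`s:

```js js-flavor=ts
// playwright.config.ts
import { PlaywrightTestConfig } from '@playwright/test';

const config: PlaywrightTestConfig = {
  projects: [
    // Fast unit-style tests, no retries.
    { name: 'unit', testDir: './tests/unit', retries: 0 },
    // Slower integration tests, retried on failure.
    { name: 'integration', testDir: './tests/integration', retries: 2 },
  ],
};
export default config;
```
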
### Different tests, different configuration

Each project can be configured separately, and can run a different set of tests with different options. You can use [`property: TestProject.testDir`], [`property: TestProject.testMatch`] and [`property: TestProject.testIgnore`] to configure which tests the project should run.

Here is an example that runs projects with different tests and configurations. The "Smoke" project runs a small subset of tests without retries, and the "Default" project runs all other tests with retries.

```js js-flavor=ts
// playwright.config.ts
import { PlaywrightTestConfig } from '@playwright/test';

const config: PlaywrightTestConfig = {
  timeout: 60000, // Timeout is shared between all tests.
  projects: [
    {
      name: 'Smoke',
      testMatch: /.*smoke.spec.ts/,
      retries: 0,
    },
    {
      name: 'Default',
      testIgnore: /.*smoke.spec.ts/,
      retries: 2,
    },
  ],
};
export default config;
```

```js js-flavor=js
// playwright.config.js
// @ts-check

/** @type {import('@playwright/test').PlaywrightTestConfig} */
const config = {
  timeout: 60000, // Timeout is shared between all tests.
  projects: [
    {
      name: 'Smoke',
      testMatch: /.*smoke.spec.ts/,
      retries: 0,
    },
    {
      name: 'Default',
      testIgnore: /.*smoke.spec.ts/,
      retries: 2,
    },
  ],
};
module.exports = config;
```

You can run all projects or just a single one:
```bash
# Run both projects
npx playwright test

# Run a single project
npx playwright test --project=Smoke
```

### Custom project parameters

Projects can also be used to parametrize tests with your custom configuration - take a look at [this separate guide](./test-parameterize.md#parameterized-projects).

## WorkerInfo object

Depending on the configuration and failures, Playwright Test might use a different number of worker processes to run all the tests. For example, Playwright Test will always start a new worker process after a failing test.

Worker-scoped fixtures receive a [WorkerInfo] parameter that describes the current worker configuration. See [WorkerInfo] properties for available worker information.

Consider an example where we run a new http server per worker process, and use `workerIndex` to produce a unique port number:

```js js-flavor=js
// my-test.js
const base = require('@playwright/test');
const http = require('http');

// Note how we mark the fixture as { scope: 'worker' }.
// Also note that we pass empty {} first, since we do not declare any test fixtures.
exports.test = base.test.extend({
  server: [ async ({}, use, workerInfo) => {
    // Start the server.
    const server = http.createServer();
    server.listen(9000 + workerInfo.workerIndex);
    await new Promise(ready => server.once('listening', ready));

    // Use the server in the tests.
    await use(server);

    // Cleanup.
    await new Promise(done => server.close(done));
  }, { scope: 'worker' } ]
});
```

```js js-flavor=ts
// my-test.ts
import { test as base } from '@playwright/test';
import * as http from 'http';

// Note how we mark the fixture as { scope: 'worker' }.
// Also note that we pass empty {} first, since we do not declare any test fixtures.
export const test = base.extend<{}, { server: http.Server }>({
  server: [ async ({}, use, workerInfo) => {
    // Start the server.
    const server = http.createServer();
    server.listen(9000 + workerInfo.workerIndex);
    await new Promise(ready => server.once('listening', ready));

    // Use the server in the tests.
    await use(server);

    // Cleanup.
    await new Promise(done => server.close(done));
  }, { scope: 'worker' } ]
});
```

## Add custom matchers using expect.extend
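
As a brief sketch of the idea (the `toBeWithinRange` matcher below is hypothetical), a custom matcher returns a `pass` flag and a `message`:

```js js-flavor=ts
import { expect } from '@playwright/test';

expect.extend({
  // Hypothetical matcher: asserts that a number falls within an inclusive range.
  toBeWithinRange(received: number, floor: number, ceiling: number) {
    const pass = received >= floor && received <= ceiling;
    return {
      pass,
      message: () => pass
        ? `expected ${received} not to be within range ${floor} - ${ceiling}`
        : `expected ${received} to be within range ${floor} - ${ceiling}`,
    };
  },
});
```

Once the matcher is registered before the tests run, a test can call `expect(value).toBeWithinRange(1, 10)`.
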
@@ -182,6 +182,8 @@ Whether to record trace for each test. Defaults to `'off'`.
* `'retain-on-failure'`: Record trace for each test, but remove all traces from successful test runs.
* `'on-first-retry'`: Record trace only when retrying a test for the first time.

For more control, pass an object that specifies `mode` and trace features to enable.

Learn more about [recording trace](./test-configuration.md#record-test-trace).
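
For example, a short sketch of such an object in the `use` section of the config (values chosen for illustration; the available feature flags match the `trace` type above):

```js js-flavor=ts
// playwright.config.ts (illustrative values)
import { PlaywrightTestConfig } from '@playwright/test';

const config: PlaywrightTestConfig = {
  use: {
    // Record a trace on the first retry, including screenshots, DOM snapshots and sources.
    trace: { mode: 'on-first-retry', screenshots: true, snapshots: true, sources: true },
  },
};
export default config;
```
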

## property: TestOptions.userAgent = %%-context-option-useragent-%%

@@ -199,7 +201,7 @@ Whether to record video for each test. Defaults to `'off'`.
* `'retain-on-failure'`: Record video for each test, but remove all videos from successful test runs.
* `'on-first-retry'`: Record video only when retrying a test for the first time.

If video size is not specified, it will be equal to [`property: TestOptions.viewport`] scaled down to fit into 800x800. If `viewport` is not configured explicitly the video size defaults to 800x450. Actual picture of each page will be scaled down if necessary to fit the specified size.
To control video size, pass an object with `mode` and `size` properties. If video size is not specified, it will be equal to [`property: TestOptions.viewport`] scaled down to fit into 800x800. If `viewport` is not configured explicitly the video size defaults to 800x450. Actual picture of each page will be scaled down if necessary to fit the specified size.

Learn more about [recording video](./test-configuration.md#record-video).
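
For example, a small sketch (the size values are illustrative):

```js js-flavor=ts
// playwright.config.ts (illustrative values)
import { PlaywrightTestConfig } from '@playwright/test';

const config: PlaywrightTestConfig = {
  use: {
    // Record video on the first retry, scaled to fit into 640x480.
    video: { mode: 'on-first-retry', size: { width: 640, height: 480 } },
  },
};
export default config;
```
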

@@ -93,10 +93,10 @@ The same works inside describe.
const { test, expect } = require('@playwright/test');

test.describe('locale block', () => {
  // Run tests in this describe block with specified locale.
  test.use({ locale: 'en-US' });
  // Run tests in this describe block with portrait-like viewport.
  test.use({ viewport: { width: 600, height: 900 } });

  test('my locale test', async ({ page }) => {
  test('my portrait test', async ({ page }) => {
    // ...
  });
});
@@ -107,10 +107,10 @@ test.describe('locale block', () => {
import { test, expect } from '@playwright/test';

test.describe('locale block', () => {
  // Run tests in this describe block with specified locale.
  test.use({ locale: 'en-US' });
  // Run tests in this describe block with portrait-like viewport.
  test.use({ viewport: { width: 600, height: 900 } });

  test('my locale test', async ({ page }) => {
  test('my portrait test', async ({ page }) => {
    // ...
  });
});
@@ -453,7 +453,7 @@ Playwright Test can record videos for your tests, controlled by the `video` opti
- `'retain-on-failure'` - Record video for each test, but remove all videos from successful test runs.
- `'on-first-retry'` - Record video only when retrying a test for the first time.

Video files will appear in the test output directory, typically `test-results`.
Video files will appear in the test output directory, typically `test-results`. See [`property: TestOptions.video`] for advanced video configuration.

```js js-flavor=js
// @ts-check
```
@@ -487,7 +487,7 @@ Playwright Test can produce test traces while running the tests. Later on, you c
- `'retain-on-failure'` - Record trace for each test, but remove it from successful test runs.
- `'on-first-retry'` - Record trace only when retrying a test for the first time.

Trace files will appear in the test output directory, typically `test-results`.
Trace files will appear in the test output directory, typically `test-results`. See [`property: TestOptions.trace`] for advanced trace configuration.

```js js-flavor=js
// @ts-check
```
@@ -514,7 +514,7 @@ export default config;

## More browser and context options

Any options accepted by [`method: BrowserType.launch`] or [`method: Browser.newContext`] can be put into `launchOptions` or `contextOptions` respectively in the `use` section.
Any options accepted by [`method: BrowserType.launch`] or [`method: Browser.newContext`] can be put into `launchOptions` or `contextOptions` respectively in the `use` section. Take a look at the [full list of available options][TestOptions].
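
For instance, a rough sketch (the specific option values are arbitrary):

```js js-flavor=ts
// playwright.config.ts (illustrative values)
import { PlaywrightTestConfig } from '@playwright/test';

const config: PlaywrightTestConfig = {
  use: {
    // Passed to BrowserType.launch().
    launchOptions: { slowMo: 50 },
    // Passed to Browser.newContext().
    contextOptions: { ignoreHTTPSErrors: true },
  },
};
export default config;
```
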
```js js-flavor=js
// @ts-check
```
@@ -547,7 +547,7 @@ However, most common ones like `headless` or `viewport` are available directly i

## Testing options

In addition to configuring [Browser] or [BrowserContext], videos or screenshots, Playwright Test has many options to configure how your tests are run. Below are the most common ones, see [advanced configuration](./test-advanced.md) for the full list.
In addition to configuring [Browser] or [BrowserContext], videos or screenshots, Playwright Test has many options to configure how your tests are run. Below are the most common ones, see [TestConfig] for the full list.

- `forbidOnly`: Whether to exit with an error if any tests are marked as `test.only`. Useful on CI.
- `globalSetup`: Path to the global setup file. This file will be required and run before all the tests. It must export a single function.

packages/playwright-test/types/test.d.ts (vendored): 10 changes

@@ -2720,6 +2720,8 @@ export interface PlaywrightWorkerOptions {
   * - `'retain-on-failure'`: Record trace for each test, but remove all traces from successful test runs.
   * - `'on-first-retry'`: Record trace only when retrying a test for the first time.
   *
   * For more control, pass an object that specifies `mode` and trace features to enable.
   *
   * Learn more about [recording trace](https://playwright.dev/docs/test-configuration#record-test-trace).
   */
  trace: TraceMode | /** deprecated */ 'retry-with-trace' | { mode: TraceMode, snapshots?: boolean, screenshots?: boolean, sources?: boolean };
@@ -2730,10 +2732,10 @@ export interface PlaywrightWorkerOptions {
   * - `'retain-on-failure'`: Record video for each test, but remove all videos from successful test runs.
   * - `'on-first-retry'`: Record video only when retrying a test for the first time.
   *
   * If video size is not specified, it will be equal to
   * [testOptions.viewport](https://playwright.dev/docs/api/class-testoptions#test-options-viewport) scaled down to fit into
   * 800x800. If `viewport` is not configured explicitly the video size defaults to 800x450. Actual picture of each page will
   * be scaled down if necessary to fit the specified size.
   * To control video size, pass an object with `mode` and `size` properties. If video size is not specified, it will be
   * equal to [testOptions.viewport](https://playwright.dev/docs/api/class-testoptions#test-options-viewport) scaled down to
   * fit into 800x800. If `viewport` is not configured explicitly the video size defaults to 800x450. Actual picture of each
   * page will be scaled down if necessary to fit the specified size.
   *
   * Learn more about [recording video](https://playwright.dev/docs/test-configuration#record-video).
   */