docs: add test runner docs (#6784)

Pavel Feldman 2021-05-27 20:30:03 -07:00 committed by GitHub
parent 93a0efa832
commit 0f760627fa
14 changed files with 1168 additions and 386 deletions

docs/src/test-advanced.md

@@ -0,0 +1,323 @@
---
id: test-advanced
title: "Advanced Configuration"
---
<!-- TOC -->
<br/>
## Projects
Playwright Test supports running multiple test projects at the same time. This is useful for running the same tests in multiple configurations. For example, consider running tests against multiple versions of the database.
To make use of this feature, we will declare an "option fixture" for the database version, and use it in the tests.
```ts
// my-test.ts
import { test as base } from 'playwright/test';
const test = base.extend<{ version: string, database: Database }>({
// Default value for the version.
version: '1.0',
// Use version when connecting to the database.
database: async ({ version }, use) => {
const db = await connectToDatabase(version);
await use(db);
await db.close();
},
});
```
We can use our fixtures in the test.
```ts
// example.spec.ts
import test from './my-test';
test('test 1', async ({ database }) => {
// Test code goes here.
});
test('test 2', async ({ version, database }) => {
test.fixme(version === '2.0', 'This feature is not implemented in 2.0 yet');
// Test code goes here.
});
```
Now we can run tests in multiple configurations by using projects.
```ts
// pwtest.config.ts
import { PlaywrightTestConfig } from 'playwright/test';
const config: PlaywrightTestConfig = {
timeout: 20000,
projects: [
{
name: 'v1',
use: { version: '1.0' },
},
{
name: 'v2',
use: { version: '2.0' },
},
]
};
export default config;
```
Each project can be configured separately, and run a different set of tests with different parameters.
Supported options are `name`, `outputDir`, `repeatEach`, `retries`, `snapshotDir`, `testDir`, `testIgnore`, `testMatch` and `timeout`. See [configuration object](#configuration-object) for a detailed description.
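For example, each project can point at its own tests and apply its own retry policy. A sketch using the options listed above (the directory names are illustrative):
```ts
// pwtest.config.ts
import { PlaywrightTestConfig } from 'playwright/test';

const config: PlaywrightTestConfig = {
  projects: [
    {
      // Quick smoke suite, no retries.
      name: 'smoke',
      testDir: 'tests/smoke',
      retries: 0,
      timeout: 10000,
    },
    {
      // Full suite with retries and a longer per-test timeout.
      name: 'full',
      testDir: 'tests/full',
      retries: 2,
      timeout: 30000,
    },
  ],
};
export default config;
```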
You can run all projects or just a single one:
```sh
# Run both projects - each test will be run twice
npx playwright test
# Run a single project - each test will be run once
npx playwright test --project=v2
```
## workerInfo object
Depending on the configuration and failures, Playwright Test might use a different number of worker processes to run all the tests. For example, Playwright Test will always start a new worker process after a failing test.
Worker-scoped fixtures and `beforeAll` and `afterAll` hooks receive a `workerInfo` parameter. The following information is accessible from the `workerInfo`:
- `config` - [Configuration object](#configuration-object).
- `project` - Specific [project](#projects) configuration for this worker. Different projects are always run in separate processes.
- `workerIndex: number` - A unique sequential index assigned to the worker process.
Consider an example where we run a new http server per worker process, and use `workerIndex` to produce a unique port number:
```ts
// my-test.ts
import { test as base } from 'playwright/test';
import * as http from 'http';
// Note how we mark the fixture as { scope: 'worker' }.
// Also note that we pass empty {} first, since we do not declare any test fixtures.
const test = base.extend<{}, { server: http.Server }>({
server: [ async ({}, use, workerInfo) => {
// Start the server.
const server = http.createServer();
server.listen(9000 + workerInfo.workerIndex);
await new Promise(ready => server.once('listening', ready));
// Use the server in the tests.
await use(server);
// Cleanup.
await new Promise(done => server.close(done));
}, { scope: 'worker' } ]
});
export default test;
```
## testInfo object
Test fixtures and `beforeEach` and `afterEach` hooks receive a `testInfo` parameter. It is also available to the test function as a second parameter.
In addition to everything from the [`workerInfo`](#workerinfo), the following information is accessible before and during the test:
- `title: string` - Test title.
- `file: string` - Full path to the test file.
- `line: number` - Line number of the test declaration.
- `column: number` - Column number of the test declaration.
- `fn: Function` - Test body function.
- `repeatEachIndex: number` - The sequential repeat index.
- `retry: number` - The sequential number of the test retry (zero means first run).
- `expectedStatus: 'passed' | 'failed' | 'timedOut'` - Whether this test is expected to pass, fail or time out.
- `timeout: number` - Test timeout.
- `annotations` - [Annotations](#annotations) that were added to the test.
- `snapshotPathSegment: string` - Relative path, used to locate snapshots for the test.
- `snapshotPath(...pathSegments: string[])` - Function that returns the full path to a particular snapshot for the test.
- `outputDir: string` - Absolute path to the output directory for this test run.
- `outputPath(...pathSegments: string[])` - Function that returns the full path to a particular output artifact for the test.
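For example, a test can inspect its own retry index to log extra context on retries. A minimal sketch:
```ts
// example.spec.ts
import { test } from 'playwright/test';

test('logs its retries', async ({ page }, testInfo) => {
  // retry is zero on the first run and increments on each retry.
  if (testInfo.retry > 0)
    console.log(`Retrying "${testInfo.title}", attempt ${testInfo.retry + 1}`);
  // Test code goes here.
});
```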
The following information is accessible after the test body has finished, in fixture teardown:
- `duration: number` - Test running time in milliseconds.
- `status: 'passed' | 'failed' | 'timedOut'` - The actual test result.
- `error` - Any error thrown by the test body.
- `stdout: (string | Buffer)[]` - Array of stdout chunks collected during the test run.
- `stderr: (string | Buffer)[]` - Array of stderr chunks collected during the test run.
Here is an example test that saves some information:
```ts
// example.spec.ts
import { test } from 'playwright/test';
test('my test needs a file', async ({ table }, testInfo) => {
// Do something with the table...
// ... and then save contents.
const filePath = testInfo.outputPath('table.dat');
await table.saveTo(filePath);
});
```
Here is an example fixture that automatically saves debug logs when the test fails:
```ts
// my-test.ts
import * as debug from 'debug';
import * as fs from 'fs';
import { test as base } from 'playwright/test';
// Note how we mark the fixture as { auto: true }.
// This way it is always instantiated, even if the test does not use it explicitly.
const test = base.extend<{ saveLogs: void }>({
saveLogs: [ async ({}, use, testInfo) => {
const logs = [];
debug.log = (...args) => logs.push(args.map(String).join(''));
debug.enable('mycomponent');
await use();
if (testInfo.status !== testInfo.expectedStatus)
fs.writeFileSync(testInfo.outputPath('logs.txt'), logs.join('\n'), 'utf8');
}, { auto: true } ]
});
export default test;
```
## Global setup and teardown
To set something up once before running all tests, use the `globalSetup` option in the [configuration file](#writing-a-configuration-file). Similarly, use `globalTeardown` to run something once after all the tests.
```ts
// global-setup.ts
import * as http from 'http';
module.exports = async () => {
const server = http.createServer(app);
await new Promise(done => server.listen(done));
process.env.SERVER_PORT = String(server.address().port); // Expose port to the tests.
global.__server = server; // Save the server for the teardown.
};
```
```ts
// global-teardown.ts
module.exports = async () => {
await new Promise(done => global.__server.close(done));
};
```
```ts
// pwtest.config.ts
import { PlaywrightTestConfig } from 'playwright/test';
const config: PlaywrightTestConfig = {
globalSetup: 'global-setup.ts',
globalTeardown: 'global-teardown.ts',
};
export default config;
```
## Fixture options
It is common for [fixtures](#fixtures) to be configurable, based on various test needs.
Playwright Test allows creating an "options" fixture for this purpose.
```ts
// my-test.ts
import { test as base } from 'playwright/test';
const test = base.extend<{ dirCount: number, dirs: string[] }>({
// Define an option that can be configured in tests with `test.use()`.
// Provide a default value.
dirCount: 1,
// Define a fixture that provides some useful functionality to the test.
// In this example, it will supply some temporary directories.
// Our fixture uses the "dirCount" option that can be configured by the test.
dirs: async ({ dirCount }, use, testInfo) => {
const dirs = [];
for (let i = 0; i < dirCount; i++)
dirs.push(testInfo.outputPath('dir-' + i));
// Use the list of directories in the test.
await use(dirs);
// Cleanup if needed.
},
});
export default test;
```
We can now pass the option value with `test.use()`.
```ts
// example.spec.ts
import test from './my-test';
// Here we define the option value. Tests in this file need two temporary directories.
test.use({ dirCount: 2 });
test('my test title', async ({ dirs }) => {
// Test can use "dirs" right away - the fixture has already run and created two temporary directories.
test.expect(dirs.length).toBe(2);
});
```
In addition to `test.use()`, we can also specify options in the configuration file.
```ts
// pwtest.config.ts
import { PlaywrightTestConfig } from 'playwright/test';
const config: PlaywrightTestConfig = {
// All tests will get three directories by default, unless it is overridden with test.use().
use: { dirCount: 3 },
};
export default config;
```
### Add custom matchers using expect.extend
Playwright Test uses [expect](https://jestjs.io/docs/expect) under the hood, which can be extended with [custom matchers](https://jestjs.io/docs/expect#expectextendmatchers). See the following example where a custom `toBeWithinRange` function gets added.
```ts
// pwtest.config.ts
import * as pwtest from 'playwright/test';
pwtest.expect.extend({
toBeWithinRange(received: number, floor: number, ceiling: number) {
const pass = received >= floor && received <= ceiling;
if (pass) {
return {
message: () => 'passed',
pass: true,
};
} else {
return {
message: () => 'failed',
pass: false,
};
}
},
});
const config = {};
export default config;
```
```ts
// example.spec.ts
import { test } from 'playwright/test';
test('numeric ranges', () => {
test.expect(100).toBeWithinRange(90, 110);
test.expect(101).not.toBeWithinRange(0, 100);
});
```
```ts
// global.d.ts
declare namespace folio {
interface Matchers<R> {
toBeWithinRange(a: number, b: number): R;
}
}
```
To use matchers from expect extension libraries like [jest-extended](https://github.com/jest-community/jest-extended#installation), import them in your `global.d.ts`:
```ts
// global.d.ts
import 'jest-extended';
```

docs/src/test-annotations.md

@@ -0,0 +1,29 @@
---
id: test-annotations
title: "Annotations"
---
Sadly, tests do not always pass. Playwright Test supports test annotations to deal with failures, flakiness and tests that are not yet ready.
```ts
// example.spec.ts
import { test } from 'playwright/test';
test('basic', async ({ version, table }) => {
test.skip(version === 'v2', 'This test crashes the database in v2, better not run it.');
// Test goes here.
});
test('can insert multiple rows', async ({ table }) => {
test.fail('Broken test, but we should fix it!');
// Test goes here.
});
```
Annotations may be conditional, in which case they only apply when the condition is truthy. Annotations may depend on test arguments. There could be multiple annotations on the same test, possibly in different configurations.
Possible annotations include:
- `skip` marks the test as irrelevant. Playwright Test does not run such a test. Use this annotation when the test is not applicable in some configuration.
- `fail` marks the test as failing. Playwright Test will run this test and ensure it does indeed fail. If the test does not fail, Playwright Test will complain.
- `fixme` marks the test as failing. Playwright Test will not run this test, as opposed to the `fail` annotation. Use `fixme` when running the test is slow or crash-prone.
- `slow` marks the test as slow and triples the test timeout.
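Conditional `fixme` and the `slow` annotation follow the same pattern. A sketch (assuming `test.slow()` applies the `slow` annotation), reusing the hypothetical `version` and `table` fixtures from above:
```ts
// example.spec.ts
import { test } from 'playwright/test';

test('can delete rows', async ({ version, table }) => {
  // Do not run the body at all when the feature is missing.
  test.fixme(version === 'v2', 'Deleting rows is not implemented in v2 yet');
  // Test goes here.
});

test('can import a large dataset', async ({ table }) => {
  // Triple the default timeout for this heavyweight test.
  test.slow();
  // Test goes here.
});
```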

docs/src/test-cli.md

@@ -0,0 +1,30 @@
---
id: test-cli
title: "Command Line"
---
```sh
# Ask for help!
npx playwright test --help
```
Arguments passed to `npx playwright test` are treated as a filter for test files. For example, `npx playwright test my-spec` will only run tests from files with `my-spec` in the name.
All the options are available in the [configuration file](#writing-a-configuration-file). However, selected options can be passed on the command line and take priority over the configuration file:
- `--config <file>` or `-c <file>`: Configuration file. Defaults to `pwtest.config.ts` or `pwtest.config.js` in the current directory.
- `--forbid-only`: Whether to disallow `test.only` exclusive tests. Useful on CI. Overrides `config.forbidOnly` option from the configuration file.
- `--grep <grep>` or `-g <grep>`: Only run tests matching this regular expression, for example `/my.*test/i` or `my-test`. Overrides `config.grep` option from the configuration file.
- `--global-timeout <number>`: Total timeout in milliseconds for the whole test run. By default, there is no global timeout. Overrides `config.globalTimeout` option from the configuration file.
- `--help`: Display help.
- `--list`: List all the tests, but do not run them.
- `--max-failures <N>` or `-x`: Stop after the first `N` test failures. Passing `-x` stops after the first failure. Overrides `config.maxFailures` option from the configuration file.
- `--output <dir>`: Directory for artifacts produced by tests, defaults to `test-results`. Overrides `config.outputDir` option from the configuration file.
- `--quiet`: Whether to suppress stdout and stderr from the tests. Overrides `config.quiet` option from the configuration file.
- `--repeat-each <number>`: Specifies how many times to run each test. Defaults to one. Overrides `config.repeatEach` option from the configuration file.
- `--reporter <reporter>`: Specify the reporter to use, comma-separated, can be some combination of `dot`, `json`, `junit`, `line`, `list` and `null`. See [reporters](#reporters) for more information.
- `--retries <number>`: The maximum number of retries for each [flaky test](#flaky-tests), defaults to zero (no retries). Overrides `config.retries` option from the configuration file.
- `--shard <shard>`: [Shard](#shards) tests and execute only selected shard, specified in the form `current/all`, 1-based, for example `3/5`. Overrides `config.shard` option from the configuration file.
- `--project <project...>`: Only run tests from one of the specified [projects](#projects). Defaults to running all projects defined in the configuration file.
- `--timeout <number>`: Maximum timeout in milliseconds for each test, defaults to 10 seconds. Overrides `config.timeout` option from the configuration file.
- `--update-snapshots` or `-u`: Whether to update snapshots with actual results instead of comparing them. Use this when snapshot expectations have changed. Overrides `config.updateSnapshots` option from the configuration file.
- `--workers <workers>` or `-j <workers>`: The maximum number of concurrent worker processes. Overrides `config.workers` option from the configuration file.
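These options compose with the test file filter. For example (the `todo` filter and the flag values are illustrative):
```sh
# Run tests from files matching "todo" in the chromium project,
# using 4 workers, 2 retries and the dot reporter.
npx playwright test todo --project=chromium --workers=4 --retries=2 --reporter=dot
```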

docs/src/test-configuration.md

@@ -0,0 +1,130 @@
---
id: test-configuration
title: "Configuration"
---
<!-- TOC -->
<br/>
## Configuration object
The configuration file exports a single configuration object.
You can modify browser launch options, context creation options and testing options either globally in the configuration file, or locally in the test file.
See the full list of launch options in [`browserType.launch()`](https://playwright.dev/docs/api/class-browsertype#browsertypelaunchoptions) documentation.
See the full list of context options in [`browser.newContext()`](https://playwright.dev/docs/api/class-browser#browsernewcontextoptions) documentation.
```ts
// pwtest.config.ts
import { PlaywrightTestConfig } from 'playwright/test';
const config: PlaywrightTestConfig = {
// 20 seconds per test.
timeout: 20000,
// Forbid test.only on CI.
forbidOnly: !!process.env.CI,
// Two retries for each test.
retries: 2,
};
export default config;
```
## Global configuration
You can specify different options for each browser using projects in the configuration file. Below is an example that changes some global testing options, and the Chromium browser configuration.
```js
// config.ts
import { PlaywrightTestConfig } from "playwright/test";
const config: PlaywrightTestConfig = {
// Each test is given 90 seconds.
timeout: 90000,
// Failing tests will be retried at most two times.
retries: 2,
projects: [
{
name: 'chromium',
use: {
browserName: 'chromium',
// Launch options
headless: false,
slowMo: 50,
// Context options
viewport: { width: 800, height: 600 },
ignoreHTTPSErrors: true,
// Testing options
video: 'retain-on-failure',
},
},
],
};
export default config;
```
## Local configuration
With `test.use()` you can override some options for a file, or a `describe` block.
```js
// my.spec.ts
import { test, expect } from "playwright/test";
// Run tests in this file with portrait-like viewport.
test.use({ viewport: { width: 600, height: 900 } });
test('my test', async ({ page }) => {
// Test code goes here.
});
```
## Test Options
- `metadata: any` - Any JSON-serializable metadata that will be put directly into the test report.
- `name: string` - Project name, useful when defining multiple [test projects](#projects).
- `outputDir: string` - Output directory for files created during the test run.
- `repeatEach: number` - The number of times to repeat each test, useful for debugging flaky tests. Overridden by `--repeat-each` command line option.
- `retries: number` - The maximum number of retry attempts given to failed tests. Overridden by `--retries` command line option.
- `screenshot: 'off' | 'on' | 'only-on-failure'` - Whether to capture a screenshot after each test, off by default.
- `off` - Do not capture screenshots.
- `on` - Capture screenshot after each test.
- `only-on-failure` - Capture screenshot after each test failure.
- `snapshotDir: string` - [Snapshots](#snapshots) directory. Overridden by `--snapshot-dir` command line option.
- `testDir: string` - Directory that will be recursively scanned for test files.
- `testIgnore: string | RegExp | (string | RegExp)[]` - Files matching one of these patterns are not considered test files.
- `testMatch: string | RegExp | (string | RegExp)[]` - Only the files matching one of these patterns are considered test files.
- `timeout: number` - Timeout for each test in milliseconds. Overridden by `--timeout` command line option.
- `video: 'off' | 'on' | 'retain-on-failure' | 'retry-with-video'` - Whether to record video for each test, off by default.
- `off` - Do not record video.
- `on` - Record video for each test.
- `retain-on-failure` - Record video for each test, but remove all videos from successful test runs.
- `retry-with-video` - Record video only when retrying a test.
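For example, a configuration that combines several of the options above (the values are illustrative; `screenshot` and `video` go under `use`, following the project example earlier):
```ts
// pwtest.config.ts
import { PlaywrightTestConfig } from 'playwright/test';

const config: PlaywrightTestConfig = {
  testDir: 'tests',
  timeout: 30000,
  retries: 2,
  use: {
    // Capture artifacts only when something goes wrong.
    screenshot: 'only-on-failure',
    video: 'retain-on-failure',
  },
};
export default config;
```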
## Test run options
These options would typically differ between local development and CI operation:
- `forbidOnly: boolean` - Whether to exit with an error if any tests are marked as `test.only`. Useful on CI. Overridden by `--forbid-only` command line option.
- `globalSetup: string` - Path to the global setup file. This file will be required and run before all the tests. It must export a single function.
- `globalTeardown: string` - Path to the global teardown file. This file will be required and run after all the tests. It must export a single function.
- `globalTimeout: number` - Total timeout in milliseconds for the whole test run. Overridden by `--global-timeout` command line option.
- `grep: RegExp | RegExp[]` - Patterns to filter tests based on their title. Overridden by `--grep` command line option.
- `maxFailures: number` - The maximum number of test failures for this test run. After reaching this number, testing will stop and exit with an error. Setting to zero (default) disables this behavior. Overridden by `--max-failures` and `-x` command line options.
- `preserveOutput: 'always' | 'never' | 'failures-only'` - Whether to preserve test output in the `outputDir`:
- `'always'` - preserve output for all tests;
- `'never'` - do not preserve output for any tests;
- `'failures-only'` - only preserve output for failed tests.
- `projects: Project[]` - Multiple [projects](#projects) configuration.
- `reporter: 'list' | 'line' | 'dot' | 'json' | 'junit'` - The reporter to use. See [reporters](#reporters) for details.
- `quiet: boolean` - Whether to suppress stdout and stderr from the tests. Overridden by `--quiet` command line option.
- `shard: { total: number, current: number } | null` - [Shard](#shards) information. Overridden by `--shard` command line option.
- `updateSnapshots: boolean` - Whether to update expected snapshots with the actual results produced by the test run. Overridden by `--update-snapshots` command line option.
- `workers: number` - The maximum number of concurrent worker processes to use for parallelizing tests. Overridden by `--workers` command line option.
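A CI-oriented configuration that combines several of these run options might look like the following sketch:
```ts
// pwtest.config.ts
import { PlaywrightTestConfig } from 'playwright/test';

const config: PlaywrightTestConfig = {
  // Fail the run if test.only accidentally made it into the code base.
  forbidOnly: !!process.env.CI,
  // Abort the whole run after 30 minutes.
  globalTimeout: 30 * 60 * 1000,
  // Keep output only for failing tests.
  preserveOutput: 'failures-only',
  // Limit parallelism on CI, let the runner decide locally.
  workers: process.env.CI ? 2 : undefined,
};
export default config;
```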


@@ -1,16 +1,18 @@
---
id: test-runner-examples
id: test-examples
title: "Examples"
---
<!-- TOC -->
<br/>
## Multiple pages
The default `context` argument is a [BrowserContext][browser-context]. Browser contexts are isolated execution environments that can host multiple pages. See [multi-page scenarios](./multi-pages.md) for more examples.
```js
import { test } from "@playwright/test";
import { test } from "playwright/test";
test("tests on multiple web pages", async ({ context }) => {
const pageFoo = await context.newPage();
@@ -25,7 +27,7 @@ test("tests on multiple web pages", async ({ context }) => {
```js
// config.ts
import { PlaywrightTestConfig } from "@playwright/test";
import { PlaywrightTestConfig } from "playwright/test";
import { devices } from "playwright";
const config: PlaywrightTestConfig = {
@@ -50,7 +52,7 @@ Define a custom route that mocks network calls for a browser context.
```js
// In foo.spec.ts
import { test, expect } from "@playwright/test";
import { test, expect } from "playwright/test";
test.beforeEach(async ({ context }) => {
// Block any css requests for each test in this file.
@@ -71,7 +73,7 @@ test("loads page without css", async ({ page }) => {
The `expect` API supports visual comparisons with `toMatchSnapshot`. This uses the [pixelmatch](https://github.com/mapbox/pixelmatch) library, and you can pass `threshold` as an option.
```js
import { test, expect } from "@playwright/test";
import { test, expect } from "playwright/test";
test("compares page screenshot", async ({ page }) => {
await page.goto("https://stackoverflow.com");
@@ -84,7 +86,7 @@ On first execution, this will generate golden snapshots. Subsequent runs will co
```sh
# Update golden snapshots when they differ from actual
npx folio --update-snapshots
npx playwright test --update-snapshots
```
### Page object model
@@ -118,7 +120,7 @@ Use the `LoginPage` class in the tests.
Use the `LoginPage` class in the tests.
```js
// my.spec.ts
import { test, expect } from "@playwright/test";
import { test, expect } from "playwright/test";
import { LoginPage } from "./login-page";
test('login works', async ({ page }) => {

docs/src/test-fixtures.md

@@ -0,0 +1,219 @@
---
id: test-fixtures
title: "Test fixtures"
---
<!-- TOC -->
<br/>
## Introduction to fixtures
Playwright Test is based on the concept of test fixtures. Test fixtures are used to establish the environment for each test, giving the test everything it needs and nothing else. Test fixtures are isolated between tests, which gives Playwright Test the following benefits:
- Playwright Test runs tests in parallel by default, making your test suite much faster.
- Playwright Test can efficiently retry the flaky failures, instead of re-running the whole suite.
- You can group tests based on their meaning, instead of their common setup.
Here is how typical test environment setup differs between traditional test style and the fixture-based one:
### Without fixtures
```ts
// example.spec.ts
describe('database', () => {
let table;
beforeEach(async ()=> {
table = await createTable();
});
afterEach(async () => {
await dropTable(table);
});
test('create user', () => {
table.insert();
// ...
});
test('update user', () => {
table.insert();
table.update();
// ...
});
test('delete user', () => {
table.insert();
table.delete();
// ...
});
});
```
### With fixtures
```ts
// example.spec.ts
import { test as base } from 'playwright/test';
// Extend basic test by providing a "table" fixture.
const test = base.extend<{ table: Table }>({
table: async ({}, use) => {
const table = await createTable();
await use(table);
await dropTable(table);
},
});
test('create user', ({ table }) => {
table.insert();
// ...
});
test('update user', ({ table }) => {
table.insert();
table.update();
// ...
});
test('delete user', ({ table }) => {
table.insert();
table.delete();
// ...
});
```
You declare the exact fixtures that the test needs, and the runner initializes them for each test individually. Tests can use any combination of the fixtures to tailor the precise environment they need. You no longer need to wrap tests in `describe`s that set up the environment, everything is declarative and typed.
There are two types of fixtures: `test` and `worker`. Test fixtures are set up for each test and worker fixtures are set up for each process that runs test files.
## Test fixtures
Test fixtures are set up for each test. Consider the following test file:
```ts
// hello.spec.ts
import test from './hello';
test('hello', ({ hello }) => {
test.expect(hello).toBe('Hello');
});
test('hello world', ({ helloWorld }) => {
test.expect(helloWorld).toBe('Hello, world!');
});
```
It uses fixtures `hello` and `helloWorld` that are set up by the framework for each test run.
Here is how test fixtures are declared and defined. Fixtures can use other fixtures - note how `helloWorld` uses `hello`.
```ts
// hello.ts
import { test as base } from 'playwright/test';
// Define test fixtures "hello" and "helloWorld".
type TestFixtures = {
hello: string;
helloWorld: string;
};
// Extend base test with our fixtures.
const test = base.extend<TestFixtures>({
// This fixture is a constant, so we can just provide the value.
hello: 'Hello',
// This fixture has some complex logic and is defined with a function.
helloWorld: async ({ hello }, use) => {
// Set up the fixture.
const value = hello + ', world!';
// Use the fixture value in the test.
await use(value);
// Clean up the fixture. Nothing to cleanup in this example.
},
});
// Now, this "test" can be used in multiple test files, and each of them will get the fixtures.
export default test;
```
With fixtures, test organization becomes flexible - you can put tests that make sense next to each other based on what they test, not based on the environment they need.
## Worker fixtures
Playwright Test uses worker processes to run test files. You can specify the maximum number of workers using `--workers` command line option. Similarly to how test fixtures are set up for individual test runs, worker fixtures are set up for each worker process. That's where you can set up services, run servers, etc. Playwright Test will reuse the worker process for as many test files as it can, provided their worker fixtures match and hence environments are identical.
Here is how the test looks:
```ts
// express.spec.ts
import test from './express-test';
import fetch from 'node-fetch';
test('fetch 1', async ({ port }) => {
const result = await fetch(`http://localhost:${port}/1`);
test.expect(await result.text()).toBe('Hello World 1!');
});
test('fetch 2', async ({ port }) => {
const result = await fetch(`http://localhost:${port}/2`);
test.expect(await result.text()).toBe('Hello World 2!');
});
```
And here is how fixtures are declared and defined:
```ts
// express-test.ts
import { test as base } from 'playwright/test';
import express from 'express';
import type { Express } from 'express';
// Declare worker fixtures.
type ExpressWorkerFixtures = {
port: number;
express: Express;
};
// Note that we did not provide any test-scoped fixtures, so we pass {}.
const test = base.extend<{}, ExpressWorkerFixtures>({
// We pass a tuple with the fixture function and options.
// In this case, we mark this fixture as worker-scoped.
port: [ async ({}, use, workerInfo) => {
// "port" fixture uses a unique value of the worker process index.
await use(3000 + workerInfo.workerIndex);
}, { scope: 'worker' } ],
// "express" fixture starts automatically for every worker - we pass "auto" for that.
express: [ async ({ port }, use) => {
// Setup express app.
const app = express();
app.get('/1', (req, res) => {
res.send('Hello World 1!')
});
app.get('/2', (req, res) => {
res.send('Hello World 2!')
});
// Start the server.
let server;
console.log('Starting server...');
await new Promise(f => {
server = app.listen(port, f);
});
console.log('Server ready');
// Use the server in the tests.
await use(server);
// Cleanup.
console.log('Stopping server...');
await new Promise(f => server.close(f));
console.log('Server stopped');
}, { scope: 'worker', auto: true } ],
});
export default test;
```

docs/src/test-intro.md

@@ -0,0 +1,207 @@
---
id: test-intro
title: "Playwright Tests"
---
Playwright Test Runner was created specifically to accommodate the needs of end-to-end testing. It does everything you would expect from a regular test runner, and more. Playwright Test allows you to:
- Run tests across all browsers.
- Execute tests in parallel.
- Enjoy context isolation out of the box.
- Capture videos, screenshots and other artifacts on failure.
- Integrate your POMs as extensible fixtures.
<br/>
<!-- TOC -->
<br/>
## Installation
Playwright already includes a test runner for end-to-end tests.
```sh
npm i -D playwright
```
## First test
Create `tests/foo.spec.ts` to define your test.
```js
import { test, expect } from 'playwright/test';
test('is a basic test with the page', async ({ page }) => {
await page.goto('https://playwright.dev/');
const name = await page.innerText('.navbar__title');
expect(name).toBe('Playwright');
});
```
Now run your tests:
```sh
# Assuming that test files are in the tests directory.
npx pwtest -c tests
```
Playwright Test just ran a test using the Chromium browser, in headless mode. Let's tell it to use a headed browser:
```sh
# Assuming that test files are in the tests directory.
npx pwtest -c tests --headed
```
What about other browsers? Let's run the same test using Firefox:
```sh
# Assuming that test files are in the tests directory.
npx pwtest -c tests --browser=firefox
```
And finally, on all three browsers:
```sh
# Assuming that test files are in the tests directory.
npx pwtest -c tests --browser=all
```
Refer to [configuration](./test-configuration.md) section for configuring test runs in different modes with different browsers.
## Test fixtures
You noticed an argument `{ page }` that the test above has access to:
```js
test('basic test', async ({ page }) => {
...
```
We call these arguments `fixtures`. Fixtures are objects that are created for each test run. Playwright Test comes loaded with those fixtures, and you can add your own fixtures as well. When running tests, Playwright Test looks at each test declaration, analyses the set of fixtures the test needs and prepares those fixtures specifically for the test.
Here is a list of the pre-defined fixtures that you are likely to use most of the time:
|Fixture |Type |Description |
|:----------|:----------------|:--------------------------------|
|page |[Page] |Isolated page for this test run. |
|context |[BrowserContext] |Isolated context for this test run. The `page` fixture belongs to this context as well. Learn how to [configure context](#modify-options) below. |
|browser |[Browser] |Browsers are shared across tests to optimize resources. Learn how to [configure browser](#modify-options) below. |
|browserName|[string] |The name of the browser currently running the test. Either `chromium`, `firefox` or `webkit`.|
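A test simply lists the fixtures it needs in its arguments. Here is a sketch that uses `context` and `browserName` together (the URLs are illustrative):
```js
import { test, expect } from 'playwright/test';

test('two pages in one context', async ({ context, browserName }) => {
  console.log(`Running in ${browserName}`);
  // Both pages share cookies and storage because they belong to the same context.
  const pageOne = await context.newPage();
  const pageTwo = await context.newPage();
  await pageOne.goto('https://playwright.dev/');
  await pageTwo.goto('https://playwright.dev/docs/intro');
  expect(pageOne.url()).not.toBe(pageTwo.url());
});
```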
## Test and assertion features
If you are familiar with test runners like Jest, Mocha or Ava, you will feel right at home with the Playwright Test syntax. These are the basic things you can do with a test:
### Focus a test
You can focus some tests. When there are focused tests, only they run.
```js
test.only('focus this test', async ({ page }) => {
// Run only focused tests in the entire project.
});
```
### Skip a test
You can skip certain tests based on a condition.
```js
test('skip this test', async ({ page, browserName }) => {
test.skip(browserName === 'firefox', 'Still working on it');
});
```
### Group tests
You can group tests to give them a logical name or to scope before/after hooks to the group.
```js
import { test, expect } from 'playwright/test';
test.describe('two tests', () => {
test.only('one', async ({ page }) => {
// ...
});
test.skip('two', async ({ page }) => {
// ...
});
});
```
### Use test hooks
You can use `test.beforeAll` and `test.afterAll` hooks to set up and tear down resources shared between tests.
And you can use `test.beforeEach` and `test.afterEach` hooks to set up and tear down resources for each test individually.
```js
import { test, expect } from 'playwright/test';
test.describe('feature foo', () => {
test.beforeEach(async ({ page }) => {
// Go to the starting url before each test.
await page.goto('https://my.start.url');
});
test('my test', async ({ page }) => {
// Assertions use the expect API.
expect(page.url()).toBe('https://my.start.url');
});
});
```
## Write a configuration file
So far, we've looked at the zero-config operation of Playwright Test. For a real-world application, it is likely that you will want to use a config.
Create `pwtest.config.ts` to configure your tests. You can specify browser launch options, run tests in multiple browsers and much more with the config. Here is an example configuration that runs every test in Chromium, Firefox and WebKit.
```js
import { PlaywrightTestConfig } from 'playwright/test';
const config: PlaywrightTestConfig = {
timeout: 30000, // Each test is given 30 seconds.
// A project per browser, each running all the tests.
projects: [
{
name: 'chromium',
use: {
browserName: 'chromium',
headless: true,
viewport: { width: 1280, height: 720 },
},
},
{
name: 'webkit',
use: {
browserName: 'webkit',
headless: true,
viewport: { width: 1280, height: 720 },
},
},
{
name: 'firefox',
use: {
browserName: 'firefox',
headless: true,
viewport: { width: 1280, height: 720 },
},
}
],
};
export default config;
```
Configure an NPM script to use the config.
```json
{
"scripts": {
"test": "npx pwtest -c config.ts"
}
}
```

docs/src/test-parallel.md

@@ -0,0 +1,28 @@
---
id: test-parallel
title: "Parallelism and sharding"
---
Playwright Test runs tests in parallel by default, using multiple worker processes.
<!-- TOC -->
<br/>
## Workers
Each worker process creates a new environment to run tests. Different projects always run in different workers. By default, the runner reuses a worker as much as it can to make testing faster, but it will create a new worker when retrying tests, after any test failure, to initialize a new environment, or just to speed up test execution if the worker limit is not reached.
The maximum number of worker processes is controlled via the [command line](#command-line) or the [configuration object](#configuration-object).
Each worker process is assigned a unique sequential index that is accessible through the [`workerInfo`](#workerinfo) object.
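The worker limit can also be set in the configuration file. A sketch:
```ts
// pwtest.config.ts
import { PlaywrightTestConfig } from 'playwright/test';

const config: PlaywrightTestConfig = {
  // Run at most 4 worker processes.
  workers: 4,
};
export default config;
```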
## Shards
Playwright Test can shard a test suite, so that it can be executed on multiple machines. For that, pass `--shard=x/y` to the command line. For example, to split the suite into three shards, each running one third of the tests:
```sh
npx playwright test --shard=1/3
npx playwright test --shard=2/3
npx playwright test --shard=3/3
```

docs/src/test-reporters.md

@@ -0,0 +1,154 @@
---
id: test-reporters
title: "Reporters"
---
<!-- TOC -->
<br/>
## Using reporters
Playwright Test comes with a few built-in reporters for different needs and the ability to provide custom reporters. The easiest way to try out built-in reporters is to pass the `--reporter` [command line option](./cli.md).
```sh
npx playwright test --reporter=line
```
For more control, you can specify reporters programmatically in the [configuration file](#writing-a-configuration-file).
```ts
// pwtest.config.ts
import { PlaywrightTestConfig } from 'playwright/test';
const config: PlaywrightTestConfig = {
reporter: 'dot',
};
// More complex example:
const config2: PlaywrightTestConfig = {
reporter: !process.env.CI
// A long list of tests for the terminal.
? 'list'
// Entirely different config on CI.
// Use very concise "dot" reporter plus a comprehensive json report.
: ['dot', { name: 'json', outputFile: 'test-results.json' }],
};
export default config;
```
## Built-in reporters
All built-in reporters show detailed information about failures, and mostly differ in verbosity for successful runs.
### List reporter
The list reporter is the default. It prints a line for each test being run. Use it with `--reporter=list` or `reporter: 'list'`.
```ts
// pwtest.config.ts
const config = {
reporter: 'list',
};
export default config;
```
Here is an example output in the middle of a test run. Failures will be listed at the end.
```sh
npx playwright test --reporter=list
Running 124 tests using 6 workers
✓ should access error in env (438ms)
✓ handle long test names (515ms)
x 1) render expected (691ms)
✓ should timeout (932ms)
should repeat each:
✓ should respect enclosing .gitignore (569ms)
should teardown env after timeout:
should respect excluded tests:
✓ should handle env beforeEach error (638ms)
should respect enclosing .gitignore:
```
### Line reporter
The line reporter is more concise than the list reporter. It uses a single line to report the last finished test, and prints failures when they occur. The line reporter is useful for large test suites where it shows the progress but does not spam the output by listing all the tests. Use it with `--reporter=line` or `reporter: 'line'`.
```ts
// pwtest.config.ts
const config = {
reporter: 'line',
};
export default config;
```
Here is an example output in the middle of a test run. Failures are reported inline.
```sh
npx playwright test --reporter=line
Running 124 tests using 6 workers
1) dot-reporter.spec.ts:20:1 render expected ===================================================
Error: expect(received).toBe(expected) // Object.is equality
Expected: 1
Received: 0
[23/124] gitignore.spec.ts - should respect nested .gitignore
```
### Dot reporter
The dot reporter is very concise - it only produces a single character per successful test run. It is useful on CI where you don't want a lot of output. Use it with `--reporter=dot` or `reporter: 'dot'`.
```ts
// pwtest.config.ts
const config = {
reporter: 'dot',
};
export default config;
```
Here is an example output in the middle of a test run. Failures will be listed at the end.
```sh
npx playwright test --reporter=dot
Running 124 tests using 6 workers
······F·············································
```
### JSON reporter
The JSON reporter produces an object with all the information about the test run. It is usually used together with a terminal reporter like `dot` or `line`.
Most likely you want to write the JSON to a file. When running with `--reporter=json`, use the `FOLIO_JSON_OUTPUT_NAME` environment variable:
```sh
FOLIO_JSON_OUTPUT_NAME=results.json npx playwright test --reporter=json,dot
```
In the configuration file, pass options directly:
```ts
// pwtest.config.ts
const config = {
reporter: { name: 'json', outputFile: 'results.json' },
};
export default config;
```
### JUnit reporter
The JUnit reporter produces a JUnit-style XML report. It is usually used together with a terminal reporter like `dot` or `line`.
Most likely you want to write the report to an XML file. When running with `--reporter=junit`, use the `FOLIO_JUNIT_OUTPUT_NAME` environment variable:
```sh
FOLIO_JUNIT_OUTPUT_NAME=results.xml npx playwright test --reporter=junit,line
```
In the configuration file, pass options directly:
```ts
// pwtest.config.ts
const config = {
reporter: { name: 'junit', outputFile: 'results.xml' },
};
export default config;
```

docs/src/test-retries.md

@@ -0,0 +1,19 @@
---
id: test-retries
title: "Test retry"
---
Playwright Test will retry tests if they fail. Pass the maximum number of retries when running the tests, or set them in the [configuration file](./test-configuration.md).
```sh
npx playwright test --retries=3
```
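The equivalent setting in the configuration file (a sketch):
```ts
// pwtest.config.ts
import { PlaywrightTestConfig } from 'playwright/test';

const config: PlaywrightTestConfig = {
  // Give failing tests up to three retry attempts.
  retries: 3,
};
export default config;
```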
Failing tests will be retried multiple times until they pass, or until the maximum number of retries is reached. Playwright Test will report all tests that failed at least once:
```sh
Running 1 test using 1 worker
××±
1 flaky
1) my.test.js:1:1
```

docs/src/test-runner-configuration.md

@@ -1,181 +0,0 @@
---
id: test-runner-configuration
title: "Configuration"
---
<!-- TOC -->
## Modify options
You can modify browser launch options, context creation options and testing options either globally in the configuration file, or locally in the test file.
Playwright test runner is based on the [Folio] framework, so it supports any configuration available in Folio, and adds a lot of Playwright-specific options.
### Globally in the configuration file
You can specify different options for each browser using projects in the configuration file. Below is an example that changes some global testing options, and Chromium browser configuration.
```js
// config.ts
import { PlaywrightTestConfig } from "@playwright/test";
const config: PlaywrightTestConfig = {
// Each test is given 90 seconds.
timeout: 90000,
// Failing tests will be retried at most two times.
retries: 2,
projects: [
{
name: 'chromium',
use: {
browserName: 'chromium',
// Launch options
headless: false,
slowMo: 50,
// Context options
viewport: { width: 800, height: 600 },
ignoreHTTPSErrors: true,
// Testing options
video: 'retain-on-failure',
},
},
],
};
export default config;
```
### Locally in the test file
With `test.use()` you can override some options for a file, or a `describe` block.
```js
// my.spec.ts
import { test, expect } from "@playwright/test";
// Run tests in this file with portrait-like viewport.
test.use({ viewport: { width: 600, height: 900 } });
test('my test', async ({ page }) => {
// Test code goes here.
});
```
### Available options
See the full list of launch options in [`browserType.launch()`](https://playwright.dev/docs/api/class-browsertype#browsertypelaunchoptions) documentation.
See the full list of context options in [`browser.newContext()`](https://playwright.dev/docs/api/class-browser#browsernewcontextoptions) documentation.
Available testing options:
- `screenshot: 'off' | 'on' | 'only-on-failure'` - Whether to capture a screenshot after each test, off by default.
- `off` - Do not capture screenshots.
- `on` - Capture screenshot after each test.
- `only-on-failure` - Capture screenshot after each test failure.
- `video: 'off' | 'on' | 'retain-on-failure' | 'retry-with-video'` - Whether to record video for each test, off by default.
- `off` - Do not record video.
- `on` - Record video for each test.
- `retain-on-failure` - Record video for each test, but remove all videos from successful test runs.
- `retry-with-video` - Record video only when retrying a test.
Most notable testing options from [Folio documentation][folio]:
- `reporter: 'dot' | 'line' | 'list'` - Choose a reporter: minimalist `dot`, concise `line` or detailed `list`. See [Folio reporters][folio-reporters] for more details.
- `retries: number` - Each failing test will be retried up to the certain number of times.
- `testDir: string` - Directory where test runner should search for test files.
- `timeout: number` - Timeout in milliseconds for each test.
- `workers: number` - The maximum number of worker processes to run in parallel.
## Skip tests with annotations
The Playwright test runner can annotate tests to skip under certain parameters. This is enabled by [Folio annotations][folio-annotations].
```js
test("should be skipped on firefox", async ({ page, browserName }) => {
test.skip(browserName === "firefox", "optional description for the skip");
// Test function
});
```
## Run tests in parallel
Tests are run in parallel by default, using multiple worker processes. You can control the parallelism with the `workers` option in the configuration file or from the command line.
```sh
# Run just a single test at a time - no parallelization
npx folio --workers=1
# Run up to 10 tests in parallel
npx folio --workers=10
```
```js
// config.ts
import { PlaywrightTestConfig } from "@playwright/test";
const config: PlaywrightTestConfig = {
// No parallelization on CI, default value locally.
worker: process.env.CI ? 1 : undefined,
projects: [
// Your projects go here
],
};
export default config;
```
By default, test runner chooses the number of workers based on available CPUs.
## Reporters
Playwright test runner comes with a few built-in reporters for different needs and ability to provide custom reporters. The easiest way to try out built-in reporters is to pass `--reporter` [command line option](#command-line). Built-in terminal reporters are minimalist `dot`, concise `line` and detailed `list`.
```sh
npx folio --reporter=line
npx folio --reporter=dot
npx folio --reporter=list
```
Alternatively, you can specify the reporter in the configuration file.
```js
// config.ts
import { PlaywrightTestConfig } from "@playwright/test";
const config: PlaywrightTestConfig = {
// Concise 'dot' on CI, more interactive 'list' when running locally
reporter: process.env.CI ? 'dot' : 'line',
projects: [
// Your projects go here
],
};
export default config;
```
### Export JUnit or JSON report
The Playwright test runner includes reporters that produce a JUnit compatible XML file or a JSON file with test results.
```js
// config.ts
import { PlaywrightTestConfig } from "@playwright/test";
const config: PlaywrightTestConfig = {
reporter: [
// Live output to the terminal
'list',
// JUnit compatible xml report
{ name: 'junit', outputFile: 'report.xml' },
// JSON file with test results
{ name: 'json', outputFile: 'report.json' },
]
projects: [
// Your projects go here
],
};
export default config;
```
[folio]: https://github.com/microsoft/folio
[folio-annotations]: https://github.com/microsoft/folio#annotations
[folio-cli]: https://github.com/microsoft/folio#command-line
[folio-reporters]: https://github.com/microsoft/folio#reporters

docs/src/test-runner-intro.md

@@ -1,196 +0,0 @@
---
id: test-runner-intro
title: "Playwright Tests"
---
Playwright Test Runner was created specifically to accommodate the needs of the end-to-end testing. It does everything you would expect from the regular test runner, and more. Playwright test allows to:
- Run tests across all browsers.
- Execute tests in parallel.
- Enjoy context isolation out of the box.
- Capture videos, screenshots and other artifacts on failure.
- Integrate your POMs as extensible fixtures.
There are many more exciting features, so read on!
<!-- TOC -->
## Installation
```sh
npm i -D @playwright/test@1.0.0-alpha
```
## First test
Create `tests/foo.spec.ts` to define your test.
```js
import { test, expect } from '@playwright/test';
test('is a basic test with the page', async ({ page }) => {
await page.goto('https://playwright.dev/');
const name = await page.innerText('.navbar__title');
expect(name).toBe('Playwright');
});
```
Now run your tests:
```sh
# Assuming that test files are in the tests directory.
npx folio -c tests
```
## Test fixtures
You noticed an argument `{ page }` that the test above has access to:
```js
test('basic test', async ({ page }) => {
...
```
We call these arguments `fixtures`. Playwright Test comes loaded with those fixtures, and you can add your own fixtures as well. Here is a list of the pre-defined fixtures that you are likely to use most of the time:
- `page`: [Page] - Isolated page for this test run.
- `context`: [BrowserContext] - Isolated context for this test run. The `page` fixture belongs to this context as well. Learn how to [configure context](#modify-options) below.
- `browser`: [Browser] - Browsers are shared across tests to optimize resources. Learn how to [configure browser](#modify-options) below.
- `browserName` - The name of the browser currently running the test. Either `chromium`, `firefox` or `webkit`.
## Test and assertion features
### Focus or skip tests
```js
import { test, expect } from '@playwright/test';
// You can focus single test.
test.only('focus this test', async ({ page }) => {
// Only this test in the entire project runs.
});
// You can skip tests.
test.skip('skip this test', async ({ page }) => {
});
```
### Group tests together
```js
import { test, expect } from '@playwright/test';
test.describe('two tests', () => {
test.only('one', async ({ page }) => {
// ...
});
test.skip('two', async ({ page }) => {
// ...
});
});
```
### Use test hooks
You can use `test.beforeAll` and `test.afterAll` hooks to set up and tear down resources shared between tests.
And you can use `test.beforeEach` and `test.afterEach` hooks to set up and tear down resources for each test individually.
```js
import { test, expect } from '@playwright/test';
test.describe('feature foo', () => {
test.beforeEach(async ({ page }) => {
// Go to the starting url before each test.
await page.goto('https://my.start.url');
});
test('my test', async ({ page }) => {
// Assertions use the expect API.
expect(page.url()).toBe('https://my.start.url');
});
});
```
## Write a configuration file
Create `config.ts` to configure your tests: specify browser launch options, run tests in multiple browsers and much more. Here is an example configuration that runs every test in Chromium, Firefox and WebKit.
```js
import { PlaywrightTestConfig } from '@playwright/test';
const config: PlaywrightTestConfig = {
timeout: 30000, // Each test is given 30 seconds.
// A project per browser, each running all the tests.
projects: [
{
name: 'chromium',
use: {
browserName: 'chromium',
headless: true,
viewport: { width: 1280, height: 720 },
},
},
{
name: 'webkit',
use: {
browserName: 'webkit',
headless: true,
viewport: { width: 1280, height: 720 },
},
},
{
name: 'firefox',
use: {
browserName: 'firefox',
headless: true,
viewport: { width: 1280, height: 720 },
},
}
],
};
export default config;
```
## Run the test suite
Tests can be run in single or multiple browsers, in parallel or sequentially.
```sh
# Run all tests across Chromium, Firefox and WebKit
npx folio --config=config.ts
# Run tests on a single browser
npx folio --config=config.ts --project=chromium
# Run tests sequentially
npx folio --config=config.ts --workers=1
# Retry failing tests
npx folio --config=config.ts --retries=2
# See all options
npx folio --help
```
Refer to the [command line documentation][folio-cli] for all options.
### Configure NPM scripts
Save the run command as an NPM script.
```json
{
"scripts": {
"test": "npx folio --config=config.ts"
}
}
```
[folio]: https://github.com/microsoft/folio
[folio-annotations]: https://github.com/microsoft/folio#annotations
[folio-cli]: https://github.com/microsoft/folio#command-line
[folio-reporters]: https://github.com/microsoft/folio#reporters


@ -7,9 +7,9 @@ With a few lines of code, you can hook up Playwright to your existing JavaScript
<!-- TOC -->
## @playwright/test
## playwright/test
[@playwright/test](./test-runner-intro.md) is our first-party recommended test runner to be used with Playwright. Learn more about it [here](./test-runner-intro.md).
[playwright/test](./test-intro.md) is our first-party recommended test runner to be used with Playwright. Learn more about it [here](./test-intro.md).
## Jest / Jasmine

docs/src/test-snapshots.md

@@ -0,0 +1,18 @@
---
id: test-snapshots
title: "Snapshots"
---
Playwright Test includes the ability to produce and compare snapshots. For that, use `expect(value).toMatchSnapshot()`. The test runner auto-detects the content type, and includes built-in matchers for text, png and jpeg images, and arbitrary binary data.
```ts
// example.spec.ts
import { test } from 'playwright/test';
test('my test', async () => {
const image = await produceSomePNG();
test.expect(image).toMatchSnapshot('optional-snapshot-name.png');
});
```
Snapshots are stored under the `__snapshots__` directory by default; the location can be configured in the [configuration object](#configuration-object).
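To keep snapshots somewhere else, set `snapshotDir` in the configuration file. A sketch (the directory name is illustrative):
```ts
// pwtest.config.ts
import { PlaywrightTestConfig } from 'playwright/test';

const config: PlaywrightTestConfig = {
  // Store snapshots under "golden" instead of "__snapshots__".
  snapshotDir: 'golden',
};
export default config;
```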