This wraps happy eyeballs in two places: where we make the
JSON request to Chromium and where we make the actual CDP WebSocket request.
It required changes inside our happy eyeballs implementation, since the
[websocket library does not
set](https://github.com/websockets/ws/blob/master/lib/websocket.js#L714)
the `clientRequestOptions.hostname` field; it only sets the `host` field,
which we now fall back to when `hostname` is not set.
This test would pass before Node.js 18 and fail on Node.js 18 and later
without my changes.
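A minimal sketch of that fallback, assuming a hypothetical `resolveHostname` helper over Node.js client request options (not Playwright's actual code):
```ts
// ws populates `host` but not `hostname`, so fall back when `hostname` is missing.
import type { ClientRequestArgs } from 'http';

function resolveHostname(options: ClientRequestArgs): string {
  // A real implementation would also strip a trailing ":port" from `host`.
  return String(options.hostname ?? options.host ?? 'localhost');
}
```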
Fixes https://github.com/microsoft/playwright/issues/20364
Right now, array previews include all array elements. For a
sparse array with a single element at index 10000000,
this produces a string large enough to OOM Node.js.
This patch changes pretty-printing. For example:
```ts
// Given this array
const a = [];
a[10] = 1;
// Before this patch, pretty-printing yields:
"[,,,,,,,,,,1]"
// With this patch, pretty-printing yields:
"[empty x 10, 1]"
```
The new array pretty-printing matches how Chrome DevTools
renders sparse arrays.
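A minimal sketch of that collapsing idea, detecting holes with the `in` operator (illustrative only, not Playwright's actual implementation):
```ts
// Collapse runs of holes in a sparse array into "empty x N" markers.
function previewSparseArray(array: unknown[]): string {
  const parts: string[] = [];
  let emptyRun = 0;
  const flushEmptyRun = () => {
    if (emptyRun > 0) {
      parts.push(`empty x ${emptyRun}`);
      emptyRun = 0;
    }
  };
  for (let i = 0; i < array.length; i++) {
    if (i in array) {
      flushEmptyRun();
      parts.push(String(array[i]));
    } else {
      emptyRun++;
    }
  }
  flushEmptyRun();
  return `[${parts.join(', ')}]`;
}

// For the array above: previewSparseArray(a) === "[empty x 10, 1]"
```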
Fixes #20347
This prevents shared workers from stalling upon restart.
We receive `Inspector.targetCrashed` and
`Inspector.targetReloadedAfterCrash` events that expect
`Runtime.runIfWaitingForDebugger` from any attached client. It is easier
and more stable to simply detach from shared workers, because we do not
inspect them.
For service workers, we should actually issue
`Runtime.runIfWaitingForDebugger` in such cases, because we attach to
them.
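A minimal sketch of that dispatch; the `WorkerSession` shape and `detach()` here are assumptions for illustration rather than Playwright's actual internals:
```ts
// Handle a worker target that reloaded after a crash, per the rules above.
interface WorkerSession {
  send(method: 'Runtime.runIfWaitingForDebugger'): Promise<void>;
  detach(): Promise<void>;
}

async function onWorkerReloadedAfterCrash(
  session: WorkerSession,
  workerKind: 'shared' | 'service',
): Promise<void> {
  if (workerKind === 'shared') {
    // We do not inspect shared workers, so detaching is the simplest stable option.
    await session.detach();
    return;
  }
  // We stay attached to service workers, so resume them explicitly.
  await session.send('Runtime.runIfWaitingForDebugger');
}
```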
Fixes #18932.
Previously, we closed pages one by one before closing the browser when
shutting down the persistent context. This logic was introduced in
https://github.com/microsoft/playwright/pull/4040 to properly finish
video recordings in the persistent context.
This process makes closing the persistent context unnecessarily
brittle. For example, headless Chromium is sometimes unable to close the
last persistent page for unknown reasons.
Instead, we can just stop video recordings manually and close the
browser right away.
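A minimal sketch of the new shutdown order; `stopVideoRecording()` and the object shapes are hypothetical, for illustration only:
```ts
// Stop recordings explicitly, then close the browser without closing pages first.
interface PersistentPage {
  stopVideoRecording(): Promise<void>;
}
interface PersistentContext {
  pages(): PersistentPage[];
  closeBrowser(): Promise<void>;
}

async function closePersistentContext(context: PersistentContext): Promise<void> {
  // Finish all recordings manually instead of closing pages one by one...
  await Promise.all(context.pages().map(page => page.stopVideoRecording()));
  // ...then close the browser right away.
  await context.closeBrowser();
}
```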
Fixes #18229.
Instead of requiring all frames in the subtree to receive a particular
event, we rely on the browser's definition of load and DOMContentLoaded.
This changes the logic in a few edge cases:
- Some browsers do not emit the load event upon `window.stop()` at all.
- DOMContentLoaded does not wait for subframes, so they might not be
ready when passing `{ waitUntil: 'domcontentloaded' }`.
`networkidle` preserves the old logic; see the example below.
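A usage example with Playwright's real `waitUntil` options (the URL is a placeholder):
```ts
import { chromium } from 'playwright';

(async () => {
  const browser = await chromium.launch();
  const page = await browser.newPage();
  // Resolves on the browser's DOMContentLoaded for the main frame;
  // subframes might not be ready yet.
  await page.goto('https://example.com', { waitUntil: 'domcontentloaded' });
  // Resolves on the browser's load event.
  await page.goto('https://example.com', { waitUntil: 'load' });
  // Preserves the old logic and waits for the network to go idle.
  await page.goto('https://example.com', { waitUntil: 'networkidle' });
  await browser.close();
})();
```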
Fixes #15474.
Notes:
* Page-level requests that are also handled by a service worker's fetch handler should not be interceptable at the page level (illustrated below).
* `Network.requestWillBeSent` does not provide enough metadata for Playwright to fire the `request` event at that time, so the event is fired once the request reaches the end of its lifecycle.
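A usage sketch with real Playwright APIs, showing a page-level route that, per the first note, is not expected to see requests served by a service worker's fetch handler (URL and pattern are placeholders):
```ts
import { chromium } from 'playwright';

(async () => {
  const browser = await chromium.launch();
  const page = await browser.newPage();
  // Requests fulfilled by a service worker's fetch handler bypass this handler.
  await page.route('**/api/**', route => route.continue());
  await page.goto('https://example.com');
  await browser.close();
})();
```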
Potential fixes to avoid startup races:
- Wait for the "juggler listening" message.
- Make sure `transport.onclose` is called when connecting to the transport after the actual pipe closure.
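A minimal sketch of the first idea, assuming the Firefox process's output is available as a stream (the wiring and exact message text are assumptions):
```ts
import type { Readable } from 'stream';
import readline from 'readline';

// Resolve once the browser prints its "juggler listening" line, so we only
// connect to the transport after the remote end is actually ready.
function waitForJugglerListening(browserOutput: Readable): Promise<void> {
  return new Promise(resolve => {
    const rl = readline.createInterface({ input: browserOutput });
    rl.on('line', line => {
      if (/juggler listening/i.test(line)) {
        rl.close();
        resolve();
      }
    });
  });
}
```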
- Do not resolve raw headers upon `loadingFinished`, since they may still come later in
`responseReceivedExtraInfo`.
- Introduce separate promises for `encodedBodySize`, `transferSize` and `responseHeadersSize`.
- Make sure we resolve each of them either with data available
from the browser or with a fallback calculation (see the sketch after this list).
- Set raw response headers for redirects on WebKit.
- Do not stall on cached responses in Chromium; they have the `hasExtraInfo` flag set erroneously.
- Use the `transferSize` that is available in the Firefox protocol.
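A minimal sketch of the per-metric promise idea; the wrapper class and names are illustrative rather than Playwright's actual internals:
```ts
// A value that resolves once, either from browser-provided data or a fallback.
class MetricPromise<T> {
  private _resolve!: (value: T) => void;
  private _resolved = false;
  readonly promise = new Promise<T>(resolve => { this._resolve = resolve; });

  // Called when the browser reports the real value (e.g. from extra-info events).
  fulfill(value: T): void {
    if (this._resolved) return;
    this._resolved = true;
    this._resolve(value);
  }

  // Called at the end of the request lifecycle with a computed fallback.
  fulfillWithFallback(value: T): void {
    this.fulfill(value);
  }
}

// One independent promise per metric, so each resolves on its own schedule.
const encodedBodySize = new MetricPromise<number>();
const transferSize = new MetricPromise<number>();
const responseHeadersSize = new MetricPromise<number>();
```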
fix(network): make allHeaders wait until all headers are available
Before, calling `allHeaders()` from `page.on('request')` would yield
provisional headers instead.
With these changes:
- In Firefox, all headers are available immediately.
- In Chromium, all headers are available upon `requestWillBeSentExtraInfo`.
- In WebKit, all headers are available upon `responseReceived`.
- In all browsers, intercepted requests use "provisional" headers
as all headers, since there is no network stack to change the headers.
Drive-by: migrated Chromium to the `hasExtraInfo` flag, which simplifies
the logic quite a bit.
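A usage example with real Playwright APIs: `allHeaders()` now resolves with the final headers, per the per-browser timing above (the URL is a placeholder):
```ts
import { chromium } from 'playwright';

(async () => {
  const browser = await chromium.launch();
  const page = await browser.newPage();
  page.on('request', async request => {
    // Waits for the complete header set instead of returning provisional headers.
    const headers = await request.allHeaders();
    console.log(request.url(), headers);
  });
  await page.goto('https://example.com');
  await browser.close();
})();
```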