8. {{domxref("PerformanceNavigationTiming.loadEventStart","loadEventStart")}}: timestamp immediately before the current document's [`load`](/en-US/docs/Web/API/Window/load_event) event handler starts.
9. {{domxref("PerformanceNavigationTiming.loadEventEnd","loadEventEnd")}}: timestamp immediately after the current document's [`load`](/en-US/docs/Web/API/Window/load_event) event handler completes.

## Performance timing confidence

The {{domxref("PerformanceNavigationTiming.confidence")}} property returns a {{domxref("PerformanceTimingConfidence")}} object containing information that indicates whether a performance record reflects typical application performance, or is likely affected by external factors.

For example, if a website has loaded after a browser "cold start" or session restore, its pages may load more slowly as a result. In such cases, a `low` confidence {{domxref("PerformanceTimingConfidence.value", "value")}} would be returned for an associated performance record. On the other hand, if the browser determines a returned performance record to be representative of typical application performance, a `high` confidence value is returned.

This confidence measure is useful for developers when trying to determine whether a performance issue is a legitimate concern, or an outlier being caused by external factors. See {{domxref("PerformanceTimingConfidence")}} for more information.
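
For example, you could separate records the browser considers representative from possibly degraded ones before aggregating them. The `bucketByConfidence()` helper below is an illustrative sketch, not part of the API:

```js
// Split navigation timing entries into "typical" and "possibly degraded"
// buckets based on the confidence value reported by the browser.
function bucketByConfidence(entries) {
  const typical = [];
  const degraded = [];
  for (const entry of entries) {
    // Entries without confidence data are treated as degraded here.
    (entry.confidence?.value === "high" ? typical : degraded).push(entry);
  }
  return { typical, degraded };
}

// Feed it entries collected by a PerformanceObserver:
// new PerformanceObserver((list) => {
//   const { typical } = bucketByConfidence(list.getEntries());
//   // aggregate only the representative loads…
// }).observe({ type: "navigation", buffered: true });
```

Note that because `value` is intentionally noised for privacy, large datasets benefit from the debiasing weighting described on the {{domxref("PerformanceTimingConfidence")}} page rather than simple filtering.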

## Other properties

The {{domxref("PerformanceNavigationTiming")}} interface provides additional properties such as {{domxref("PerformanceNavigationTiming.redirectCount","redirectCount")}} returning the number of redirects and {{domxref("PerformanceNavigationTiming.type","type")}} indicating the type of navigation.
---
title: "PerformanceNavigationTiming: confidence property"
short-title: confidence
slug: Web/API/PerformanceNavigationTiming/confidence
page-type: web-api-instance-property
status:
- experimental
browser-compat: api.PerformanceNavigationTiming.confidence
---

{{APIRef("Performance API")}}{{SeeCompatTable}}

The **`confidence`** read-only property of the {{domxref("PerformanceNavigationTiming")}} interface returns a {{domxref("PerformanceTimingConfidence")}} object containing information that indicates whether a performance record reflects typical application performance, or is likely affected by external factors.

For example, if a website has loaded after a browser "cold start" or session restore, its pages may load more slowly as a result. In such cases, a `low` confidence {{domxref("PerformanceTimingConfidence.value", "value")}} would be returned for an associated performance record. On the other hand, if the browser determines a returned performance record to be representative of typical application performance, a `high` confidence value is returned.

This confidence measure is useful for developers when trying to determine whether a performance issue is a legitimate concern, or an outlier being caused by external factors.

## Value

A {{domxref("PerformanceTimingConfidence")}} object.

## Examples

### Basic usage

This example uses a {{domxref("PerformanceObserver")}} to retrieve confidence data from observed {{domxref("PerformanceNavigationTiming")}} entries. The {{domxref("PerformanceTimingConfidence.value", "value")}} property is an enumerated value of `low` or `high`, indicating a broad confidence measure. The {{domxref("PerformanceTimingConfidence.randomizedTriggerRate", "randomizedTriggerRate")}} property is a number between `0` and `1`, inclusive, representing a percentage value that indicates how often noise is applied when exposing the `value`.

```js
const observer = new PerformanceObserver((list) => {
  list.getEntries().forEach((entry) => {
    console.log(
      `${entry.name} confidence: ${entry.confidence.value}`,
      `Trigger rate: ${entry.confidence.randomizedTriggerRate}`,
    );
  });
});

observer.observe({ type: "navigation", buffered: true });
```

## Specifications

{{Specifications}}

## Browser compatibility

{{Compat}}

## See also

- {{domxref("PerformanceTimingConfidence")}}
2 changes: 2 additions & 0 deletions files/en-us/web/api/performancenavigationtiming/index.md
The interface also supports the following properties:

- {{domxref('PerformanceNavigationTiming.activationStart')}} {{ReadOnlyInline}} {{experimental_inline}}
- : A {{domxref("DOMHighResTimeStamp")}} representing the time between when a document starts prerendering and when it is activated.
- {{domxref('PerformanceNavigationTiming.confidence')}} {{ReadOnlyInline}} {{experimental_inline}}
- : A {{domxref("PerformanceTimingConfidence")}} object containing information that indicates whether a performance record reflects typical application performance, or is likely affected by external factors.
- {{domxref('PerformanceNavigationTiming.criticalCHRestart')}} {{ReadOnlyInline}} {{experimental_inline}}
- : A {{domxref("DOMHighResTimeStamp")}} representing the time at which the connection restart occurred due to {{HTTPHeader("Critical-CH")}} HTTP response header mismatch.
- {{domxref('PerformanceNavigationTiming.domComplete')}} {{ReadOnlyInline}}
This would log a JSON object like so:

```json
{
  "loadEventEnd": 227.60000002384186,
  "type": "navigate",
  "redirectCount": 1,
  "activationStart": 0,
  "confidence": {
    "randomizedTriggerRate": 0.4994798,
    "value": "high"
  }
}
```

112 changes: 112 additions & 0 deletions files/en-us/web/api/performancetimingconfidence/index.md
---
title: PerformanceTimingConfidence
slug: Web/API/PerformanceTimingConfidence
page-type: web-api-interface
browser-compat: api.PerformanceTimingConfidence
---

{{APIRef("Performance API")}}

The **`PerformanceTimingConfidence`** interface provides access to information that indicates whether a performance record reflects typical application performance, or is likely affected by external factors.

The `PerformanceTimingConfidence` object for each navigation timing entry is accessed via the {{domxref("PerformanceNavigationTiming")}} interface's {{domxref("PerformanceNavigationTiming.confidence", "confidence")}} property.

## Instance properties

- {{domxref("PerformanceTimingConfidence.randomizedTriggerRate")}} {{ReadOnlyInline}}
- : A number indicating how often noise is applied when exposing the `value`.
- {{domxref("PerformanceTimingConfidence.value")}} {{ReadOnlyInline}}
- : An enumerated value indicating a broad confidence measure of whether a performance record reflects typical application performance, or is likely affected by external factors.

## Instance methods

- {{domxref("PerformanceTimingConfidence.toJSON()")}}
- : Returns a JSON representation of the `PerformanceTimingConfidence` object.

## Description

If a website has loaded after a browser "cold start" or session restore, its pages may load more slowly as a result. In such cases, a `low` confidence {{domxref("PerformanceTimingConfidence.value", "value")}} is returned for an associated performance record. On the other hand, if the browser determines a returned performance record to be representative of typical application performance, a `high` confidence value is returned.

> [!NOTE]
> Device factors such as CPU do not contribute to the performance assessment. Factors other than browser "cold start" and session restore may be taken into account in future updates.

This confidence measure is useful for developers when trying to determine whether a performance issue is a legitimate concern, or an outlier being caused by external factors. There is often a significant difference between real-world dashboard metrics and performance observations in page profiling tools.

### Interpreting confidence data

The reported {{domxref("PerformanceTimingConfidence.value", "value")}} is not deterministic: with probability {{domxref("PerformanceTimingConfidence.randomizedTriggerRate", "randomizedTriggerRate")}}, the browser replaces the true value with a randomly chosen one. Simply filtering out `low` records and averaging the `high` ones therefore produces biased results — you would discard records from representative loads that happened to get a random `low`, and keep records from degraded loads that got a random `high`. To recover unbiased aggregates, apply a per-record weighting based on `value` and `randomizedTriggerRate` before computing summary statistics, as the procedures below illustrate.

Once you have debiased the data, you can use the `high` confidence statistics as your baseline for typical application performance, use the `low` confidence statistics to understand how much slower degraded loads (such as cold starts) are, and focus on measuring and improving performance for issues under your control.
#### Computing debiased means

To compute debiased means for both [`high` and `low` values](/en-US/docs/Web/API/PerformanceTimingConfidence/value#value):

1. For each record:
   - Let `p` be the record's {{domxref("PerformanceTimingConfidence.randomizedTriggerRate", "randomizedTriggerRate")}}.
   - Let `c` be the record's {{domxref("PerformanceTimingConfidence.value", "value")}}.
   - Let `R` be `1` when `c` is `high`, otherwise `0`.
2. Compute the per-record weight `w` based on `c`:
   - For estimating the `high` mean: `w = (R - (p / 2)) / (1 - p)`.
   - For estimating the `low` mean: `w = ((1 - R) - (p / 2)) / (1 - p)`.
     > [!NOTE]
     > `w` may be negative for some records; you should keep every record.
   - Let `weighted_duration = duration * w` (see {{domxref("PerformanceEntry.duration", "duration")}}).
3. Let `total_weighted_duration` be the sum of the `weighted_duration` values across all records.
4. Let `sum_weights` be the sum of the `w` values across all records.
5. Let `debiased_mean = total_weighted_duration / sum_weights`, provided `sum_weights` is not near zero.
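
The steps above can be sketched in JavaScript as follows. The `debiasedMean()` helper is illustrative, not part of the API; each record is assumed to expose `duration` and `confidence` as collected from a {{domxref("PerformanceObserver")}}:

```js
// Debiased mean duration for records whose true confidence is `target`
// ("high" or "low"). Each record needs: duration, confidence.value, and
// confidence.randomizedTriggerRate (assumed to be less than 1).
function debiasedMean(records, target = "high") {
  let totalWeightedDuration = 0;
  let sumWeights = 0;
  for (const { duration, confidence } of records) {
    const p = confidence.randomizedTriggerRate;
    const R = confidence.value === "high" ? 1 : 0;
    // Per-record weight; it may be negative, so keep every record.
    const w =
      target === "high" ? (R - p / 2) / (1 - p) : (1 - R - p / 2) / (1 - p);
    totalWeightedDuration += duration * w;
    sumWeights += w;
  }
  // The estimate is undefined when the weight sum is near zero.
  if (Math.abs(sumWeights) < 1e-9) {
    return NaN;
  }
  return totalWeightedDuration / sumWeights;
}
```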

#### Computing debiased percentiles

To compute debiased percentiles for both `high` and `low`:

1. Follow the [computing debiased means](#computing_debiased_means) steps to compute a per-record weight `w`.
2. Let `sum_weights` be the sum of the `w` values across all records.
3. Let `sorted_records` be all records sorted by duration in ascending order.
4. For a desired percentile (0-100), compute `q = percentile / 100.0`.
5. Walk `sorted_records` and for each record:
- Compute cumulative weight `cw` per-record: `cw = sum_{i: duration_i <= duration_j} w_i`.
- Compute debiased cumulative distribution function per-record: `cdf = cw / sum_weights`.
6. Find the first index `idx` where `cdf >= q`.
- If `idx` is `0`, return `duration` for `sorted_records[0]`.
- If no such `idx` exists, return `duration` for `sorted_records[n]`.
7. Compute interpolation fraction:
- Let `lower_cdf` be `cdf` for `sorted_records[idx-1]`.
- Let `upper_cdf` be `cdf` for `sorted_records[idx]`.
- if `lower_cdf = upper_cdf`, return `duration` for `sorted_records[idx]`.
- Otherwise:
- Let `ifrac = (q - lower_cdf) / (upper_cdf - lower_cdf)`.
- Let `lower_duration` be `duration` for `sorted_records[idx-1]`.
- Let `upper_duration` be `duration` for `sorted_records[idx]`.
- Return `lower_duration + (upper_duration - lower_duration) * ifrac`.
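
These steps can be sketched in JavaScript as follows. The `debiasedPercentile()` helper is illustrative, not part of the API, and computes the per-record weights inline using the same formulas as the debiased mean:

```js
// Debiased percentile (0–100) of record durations, weighted toward records
// whose true confidence is `target` ("high" or "low").
function debiasedPercentile(records, percentile, target = "high") {
  // Per-record weights, as in the debiased-mean computation.
  const weighted = records.map(({ duration, confidence }) => {
    const p = confidence.randomizedTriggerRate;
    const R = confidence.value === "high" ? 1 : 0;
    const w =
      target === "high" ? (R - p / 2) / (1 - p) : (1 - R - p / 2) / (1 - p);
    return { duration, w };
  });
  const sumWeights = weighted.reduce((acc, r) => acc + r.w, 0);
  const sorted = weighted.slice().sort((a, b) => a.duration - b.duration);
  const q = percentile / 100;

  // Debiased cumulative distribution function over the sorted records.
  let cw = 0;
  const cdf = sorted.map((r) => (cw += r.w) / sumWeights);

  // First index where the CDF reaches q.
  const idx = cdf.findIndex((c) => c >= q);
  if (idx === 0) return sorted[0].duration;
  if (idx === -1) return sorted[sorted.length - 1].duration;

  // Interpolate between the records on either side of q.
  const lowerCdf = cdf[idx - 1];
  const upperCdf = cdf[idx];
  if (lowerCdf === upperCdf) return sorted[idx].duration;
  const ifrac = (q - lowerCdf) / (upperCdf - lowerCdf);
  return (
    sorted[idx - 1].duration +
    (sorted[idx].duration - sorted[idx - 1].duration) * ifrac
  );
}
```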

## Examples

### Basic usage

This example uses a {{domxref("PerformanceObserver")}} to retrieve confidence data from observed {{domxref("PerformanceNavigationTiming")}} entries.

```js
const observer = new PerformanceObserver((list) => {
  list.getEntries().forEach((entry) => {
    console.log(
      `${entry.name} confidence: ${entry.confidence.value}`,
      `Trigger rate: ${entry.confidence.randomizedTriggerRate}`,
    );
  });
});

observer.observe({ type: "navigation", buffered: true });
```

## Specifications

{{Specifications}}

## Browser compatibility

{{Compat}}

## See also

- {{domxref("PerformanceNavigationTiming")}}
---
title: "PerformanceTimingConfidence: randomizedTriggerRate property"
short-title: randomizedTriggerRate
slug: Web/API/PerformanceTimingConfidence/randomizedTriggerRate
page-type: web-api-instance-property
status:
- experimental
browser-compat: api.PerformanceTimingConfidence.randomizedTriggerRate
---

{{APIRef("Performance API")}}{{SeeCompatTable}}

The **`randomizedTriggerRate`** read-only property of the {{domxref("PerformanceTimingConfidence")}} interface indicates how often noise is applied when exposing the {{domxref("PerformanceTimingConfidence.value")}}.

Noise is added to the data to improve privacy (so that each data item is less easily identifiable). When noise is added, a `low` or `high` confidence `value` is returned with equal probability, rather than the true `value`, to obfuscate the results.
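
Because a noised `value` is drawn with equal probability, the chance that an observed `value` matches the true one follows directly from `randomizedTriggerRate` (`p`): the true value is reported with probability `1 - p`, and a noised value matches it by chance with probability `p / 2`. A quick illustrative sketch (not part of the API):

```js
// Probability that an observed confidence value equals the true value,
// given randomizedTriggerRate p: reported truthfully with probability
// (1 - p), plus a coin-flip match with probability p / 2.
function probObservedMatchesTrue(p) {
  return 1 - p + p / 2; // equivalently, 1 - p / 2
}
```

So at `p = 0` the value is always truthful, while at `p = 1` it carries no information (a 50% match rate, the same as guessing).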

## Value

A number between `0` and `1`, inclusive, which represents a percentage value. A value of `0` is equivalent to `0%`, which means that noise is never added, while `1` is equivalent to `100%`, meaning that noise is always added.

## Examples

See the main {{domxref("PerformanceTimingConfidence")}} page for an example.

## Specifications

{{Specifications}}

## Browser compatibility

{{Compat}}

## See also

- {{domxref("PerformanceTimingConfidence")}}
65 changes: 65 additions & 0 deletions files/en-us/web/api/performancetimingconfidence/tojson/index.md
---
title: "PerformanceTimingConfidence: toJSON() method"
short-title: toJSON()
slug: Web/API/PerformanceTimingConfidence/toJSON
page-type: web-api-instance-method
browser-compat: api.PerformanceTimingConfidence.toJSON
---

{{APIRef("Performance API")}}

The **`toJSON()`** method of the {{domxref("PerformanceTimingConfidence")}} interface is a {{Glossary("Serialization","serializer")}} that returns a JSON representation of the {{domxref("PerformanceTimingConfidence")}} object.

## Syntax

```js-nolint
toJSON()
```

### Parameters

None.

### Return value

A {{jsxref("JSON")}} object that is the serialization of the {{domxref("PerformanceTimingConfidence")}} object.

## Examples

### Using the toJSON method

This example uses a {{domxref("PerformanceObserver")}} to retrieve a JSON serialization of the confidence data for observed {{domxref("PerformanceNavigationTiming")}} entries.

```js
const observer = new PerformanceObserver((list) => {
  list.getEntries().forEach((entry) => {
    console.log(entry.confidence.toJSON());
  });
});

observer.observe({ type: "navigation", buffered: true });
```

This would log a JSON object like so:

```json
{
  "randomizedTriggerRate": 0.4994798,
  "value": "high"
}
```

To get a JSON string, you can use [`JSON.stringify(entry)`](/en-US/docs/Web/JavaScript/Reference/Global_Objects/JSON/stringify) directly; it will call `toJSON()` automatically.

## Specifications

{{Specifications}}

## Browser compatibility

{{Compat}}

## See also

- {{domxref("PerformanceTimingConfidence")}}
- {{jsxref("JSON")}}
33 changes: 33 additions & 0 deletions files/en-us/web/api/performancetimingconfidence/value/index.md
---
title: "PerformanceTimingConfidence: value property"
short-title: value
slug: Web/API/PerformanceTimingConfidence/value
page-type: web-api-instance-property
status:
- experimental
browser-compat: api.PerformanceTimingConfidence.value
---

{{APIRef("Performance API")}}{{SeeCompatTable}}

The **`value`** read-only property of the {{domxref("PerformanceTimingConfidence")}} interface is a broad confidence measure of whether a performance record reflects typical application performance, or is likely affected by external factors.

## Value

An enumerated value of `low` or `high`, indicating low or high confidence, respectively.

## Examples

See the main {{domxref("PerformanceTimingConfidence")}} page for an example.

## Specifications

{{Specifications}}

## Browser compatibility

{{Compat}}

## See also

- {{domxref("PerformanceTimingConfidence")}}