\(E\)-values

Core reference:

\(E\)-values quantify evidence about a statistical hypothesis. They are closely related to \(P\)-values.

Definition. Let \((\Omega, \mathcal{A}, \mathbb{P})\) be a probability space, and let \(E\) be a random variable \(E: \Omega \rightarrow [0, \infty]\) such that \(\mathbb{E}_\mathbb{P}(E) := \int_{\Omega} E \, d\mathbb{P} \le 1\). (Note that \(E\) is allowed to take the value \(\infty\): this corresponds to the strongest possible evidence that the data do not come from \(\mathbb{P}\).) Then \(E\) is an \(E\)-variable for \(\mathbb{P}\).

Let \(\mathcal{P}\) be a collection of probability distributions on the measurable space \((\Omega, \mathcal{A})\), and let \(E\) be a random variable \(E: \Omega \rightarrow [0, \infty]\) such that \(\mathbb{E}_\mathbb{P}(E) := \int_{\Omega} E \, d\mathbb{P} \le 1\) for all \(\mathbb{P} \in \mathcal{P}\). Then \(E\) is an \(E\)-variable for \(\mathcal{P}\).

The set of all \(E\)-variables for a collection \(\mathcal{P}\) of probability distributions is denoted \(\mathcal{E}(\mathcal{P})\).

The observed value of an \(E\)-variable is an \(E\)-value.
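
A canonical example: if \(\mathbb{Q}\) is a probability distribution that is absolutely continuous with respect to \(\mathbb{P}\) and \(X \sim \mathbb{P}\), then the likelihood ratio \(E = (d\mathbb{Q}/d\mathbb{P})(X)\) satisfies \(\mathbb{E}_\mathbb{P}(E) = 1\), so it is an \(E\)-variable for \(\mathbb{P}\). The minimal Python sketch below checks this by Monte Carlo; the choice \(\mathbb{P} = N(0, 1)\), \(\mathbb{Q} = N(1, 1)\) is an arbitrary illustration, not from the source.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Illustrative choice: null P = N(0, 1), alternative Q = N(1, 1).
# The likelihood ratio E = dQ/dP(X) with X ~ P has E_P[E] = 1 <= 1,
# so E is an E-variable for P.
x = rng.normal(loc=0.0, scale=1.0, size=1_000_000)               # draws from P
e = norm.pdf(x, loc=1.0, scale=1.0) / norm.pdf(x, loc=0.0, scale=1.0)

print(e.mean())   # Monte Carlo estimate of E_P[E]; should be close to 1
```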

Definition. Let \((\Omega, \mathcal{A}, \mathbb{P})\) be a probability space, and let \(P\) be a random variable \(P: \Omega \rightarrow [0, 1]\) such that \(\mathbb{P}(P \le p) \le p\) for all \(p \in [0, 1]\). Then \(P\) is a \(P\)-variable for \(\mathbb{P}\).

Let \(\mathcal{P}\) be a collection of probability distributions on the measurable space \((\Omega, \mathcal{A})\), and let \(P\) be a random variable \(P: \Omega \rightarrow [0, 1]\) such that \(\mathbb{P}(P \le p) \le p\) for all \(\mathbb{P} \in \mathcal{P}\) and all \(p \in [0, 1]\). Then \(P\) is a \(P\)-variable for \(\mathcal{P}\).

The set of all \(P\)-variables for a collection \(\mathcal{P}\) of probability distributions is denoted \(\mathcal{P}(\mathcal{P})\).

The observed value of a \(P\)-variable is a \(P\)-value.
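
As an illustration (not from the source), the one-sided \(z\)-test \(P\)-value \(P = 1 - \Phi(X)\) is uniformly distributed on \([0, 1]\) when \(X \sim N(0, 1)\), so \(\mathbb{P}(P \le p) = p\) and \(P\) is a \(P\)-variable for this null. A minimal Monte Carlo sketch:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

# Illustrative P-variable: one-sided z-test P-value P = 1 - Phi(X).
# Under the null X ~ N(0, 1), P is uniform on [0, 1], so P(P <= p) = p.
x = rng.normal(size=1_000_000)
p_var = norm.sf(x)                    # survival function 1 - Phi(x)

for p in (0.01, 0.05, 0.5):
    print(p, (p_var <= p).mean())     # empirical P(P <= p), approximately p
```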

\(P\) to \(E\) calibration function

Definition. A function \(f : [0, 1] \rightarrow [0, \infty]\) is a (\(P\)-to-\(E\)) calibrator if, for any probability space \((\Omega, \mathcal{A}, \mathbb{Q})\) and any \(P\)-variable \(P\) for \(\mathbb{Q}\), \(f(P)\) is an \(E\)-variable for \(\mathbb{Q}\).
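
A well-known family of calibrators (used here purely as an illustration) is \(f_\kappa(p) = \kappa p^{\kappa - 1}\) for \(\kappa \in (0, 1)\). The minimal sketch below reuses the uniform \(z\)-test \(P\)-variable from the previous sketch and checks numerically that \(\mathbb{E}(f_\kappa(P)) \le 1\), i.e. that \(f_\kappa(P)\) behaves like an \(E\)-variable.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)

kappa = 0.5                               # any kappa in (0, 1) gives a calibrator

def calibrate(p):
    return kappa * p ** (kappa - 1.0)     # f_kappa(p) = kappa * p^(kappa - 1)

# Reuse the uniform z-test P-variable from the previous sketch.
x = rng.normal(size=1_000_000)
p_var = norm.sf(x)

e_var = calibrate(p_var)                  # f_kappa(P) is an E-variable for the null
print(e_var.mean())                       # estimate of E[f_kappa(P)]; about 1, but noisy,
                                          # since f_kappa(P) is heavy-tailed for kappa = 0.5
```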

A calibrator \(f\) dominates a calibrator \(g\) if \(f \ge g\); \(f\) strictly dominates \(g\) if \(f \ge g\) and \(f \ne g\). A calibrator is admissible if it is not strictly dominated by any other calibrator.

The following proposition says that a calibrator is a nonnegative decreasing function that integrates to at most 1 with respect to the uniform probability measure on \([0, 1]\).

Proposition 2.2. A decreasing function \(f : [0, 1] \rightarrow [0, \infty]\) is a calibrator if and only if \(\int_0^1 f(p) \, dp \le 1\). It is admissible if and only if \(f\) is upper semicontinuous, \(f(0) = \infty\), and \(\int_0^1 f(p) \, dp = 1\).
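
As a worked check against the proposition, the family \(f_\kappa(p) = \kappa p^{\kappa - 1}\), \(\kappa \in (0, 1)\), used in the sketch above is decreasing and upper semicontinuous with \(f_\kappa(0) = \infty\), and

\[
\int_0^1 f_\kappa(p) \, dp = \int_0^1 \kappa p^{\kappa - 1} \, dp = \Big[ p^\kappa \Big]_0^1 = 1,
\]

so each \(f_\kappa\) is an admissible calibrator.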

\(E\) to \(P\) calibration function